This is my second dev.to post. If you missed the first one about building the auth system, check it out here.
Setting up fake worker failed: "Cannot read properties of undefined (reading 'WorkerMessageHandler')"
That was the error that greeted me when I tried to parse my first PDF. I'd installed pdfjs-dist, copied some code from a tutorial, and expected it to just work.
It did not just work.
This is the story of how I built the AI-powered resume analyzer for Carific.ai - and every rabbit hole I fell into along the way.
The entire codebase is MIT licensed. Every line of code in this post is live in the repo. Fork it, learn from it, improve it.
The Mission
I wanted users to:
- 📄 Upload a PDF resume or paste text directly
- 📝 Paste a job description they're applying for
- 🤖 Get AI-powered feedback on how to improve their resume for that specific role
- ⚡ See results stream in real-time - no waiting for a full response
Sounds straightforward, right? Spoiler: PDF.js had other plans.
The Stack (From package.json)
{
"next": "^16.0.7",
"react": "^19.2.1",
"ai": "^5.0.108",
"pdfjs-dist": "^5.4.449",
"zod": "^4.1.13"
}
Yes, Zod v4. The one with the completely different API that broke all my validation code. More on that later.
Chapter 1: The PDF.js Worker Nightmare
I installed pdfjs-dist, copied some code from a tutorial, and tried to parse a PDF:
// ❌ This doesn't work in Next.js
import { getDocument } from "pdfjs-dist";
const pdfDoc = await getDocument({ data: arrayBuffer }).promise;
The error: Setting up fake worker failed: "Cannot read properties of undefined (reading 'WorkerMessageHandler')"
I tried setting up the worker the "normal" way:
// ❌ Also doesn't work
import { GlobalWorkerOptions } from "pdfjs-dist";
GlobalWorkerOptions.workerSrc = "/pdf.worker.min.js";
Still broken. The worker file wasn't being served correctly by Next.js.
The Legacy Build Saves the Day
After plenty of Googling, I found the magic combination:
// ✅ The fix that actually works
import {
GlobalWorkerOptions,
getDocument,
} from "pdfjs-dist/legacy/build/pdf.mjs";
GlobalWorkerOptions.workerSrc = new URL(
"pdfjs-dist/legacy/build/pdf.worker.min.mjs",
import.meta.url
).toString();
The key insights:
- Use the legacy build (pdfjs-dist/legacy/build/pdf.mjs) - it has better compatibility with bundlers
- Use import.meta.url to resolve the worker path - this works with Next.js's bundling
- Use the .mjs extension - ESM modules play nicer with Next.js
One hour of Googling for three lines of code. Classic.
The Production Gotcha
A code review caught something I missed:
// ❌ This will fail in production
const standardFontDataUrl = "node_modules/pdfjs-dist/standard_fonts/";
node_modules doesn't exist in production deployments. The fix? I just removed the function that used it - it wasn't needed for my use case. Sometimes the best code is no code.
Chapter 2: Zod v4 Broke Everything
The Validation That Wasn't Validating
I added input validation to my API route. Copied the pattern from a tutorial:
// ❌ This doesn't work in Zod v4
const ResumeAnalysisSchema = z.object({
resumeText: z
.string({ required_error: "Resume is required" })
.min(50, "Resume must be at least 50 characters"),
});
TypeScript screamed at me:
error TS2769: No overload matches this call.
Object literal may only specify known properties, and 'required_error' does not exist
The Zod v4 API Changes
Turns out Zod v4 changed the API. required_error is gone. So is the string shorthand for error messages:
// ✅ Zod v4 syntax
const ResumeAnalysisSchema = z.object({
resumeText: z
.string({ error: "Resume is required" })
.min(50, { error: "Resume must be at least 50 characters" }),
});
And accessing validation errors? Also different:
// ❌ Zod v3
validation.error.errors[0].message;
// ✅ Zod v4
validation.error.issues[0].message;
30 minutes of debugging because I copied code from an outdated tutorial. Lesson learned: always check the version.
Single Source of Truth
A code review caught another issue - I had the same validation limits hardcoded in two places:
// ❌ Duplication that will drift
const schema = z.string().min(50).max(50000);
export const RESUME_MAX_LENGTH = 50000; // Will someone remember to update both?
The fix - constants as the single source of truth:
// ✅ lib/validations/resume-analysis.ts
export const RESUME_MIN_LENGTH = 50;
export const RESUME_MAX_LENGTH = 50_000;
export const ResumeAnalysisSchema = z.object({
resumeText: z
.string({ error: "Resume is required" })
.min(RESUME_MIN_LENGTH, {
error: `Resume must be at least ${RESUME_MIN_LENGTH} characters`,
})
.max(RESUME_MAX_LENGTH, {
error: `Resume exceeds maximum length of ${RESUME_MAX_LENGTH.toLocaleString()} characters`,
}),
});
Now changing a limit is a one-line change, and the error messages update automatically.
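Because the limits are exported, the client can reuse them too - for instance to pre-check input before ever hitting the API. Here's a hypothetical helper along those lines (the constants are inlined so the sketch stands alone; in the repo you'd import them from lib/validations/resume-analysis):

```typescript
// Mirrors the shared validation constants (inlined for this sketch)
const RESUME_MIN_LENGTH = 50;
const RESUME_MAX_LENGTH = 50_000;

// Returns an error message, or null if the text passes the length checks.
// The messages match the Zod schema, so client and server agree.
function resumeLengthError(text: string): string | null {
  const trimmed = text.trim();
  if (trimmed.length < RESUME_MIN_LENGTH) {
    return `Resume must be at least ${RESUME_MIN_LENGTH} characters`;
  }
  if (trimmed.length > RESUME_MAX_LENGTH) {
    return `Resume exceeds maximum length of ${RESUME_MAX_LENGTH.toLocaleString()} characters`;
  }
  return null;
}
```

The server-side Zod schema stays the source of truth; this is just cheap, consistent feedback before a round trip.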
Chapter 3: Streaming AI Responses
The AI SDK Makes It Easy (Finally, Something That Just Works)
After the PDF.js and Zod nightmares, the AI integration was refreshingly simple:
// lib/ai/resume-analyzer.ts
import { streamText } from "ai";
const MODEL = "google/gemini-2.5-flash-lite";
export function analyzeResume({ resumeText, jobDescription }: { resumeText: string; jobDescription: string }) {
const userMessage = `Analyze this resume against the job description...`;
return streamText({
model: MODEL,
system: RESUME_ANALYSIS_SYSTEM_PROMPT,
messages: [{ role: "user", content: userMessage }],
});
}
The API route streams the response directly:
// app/api/analyze-resume/route.ts
export async function POST(req: Request) {
// ... auth and validation ...
const result = analyzeResume({ resumeText, jobDescription });
return result.toTextStreamResponse();
}
Client-Side Streaming
Reading a streaming response on the client:
const response = await fetch("/api/analyze-resume", {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({ resumeText, jobDescription }),
});
const reader = response.body?.getReader();
if (!reader) throw new Error("Response has no body to stream");
const decoder = new TextDecoder();
while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  const chunk = decoder.decode(value, { stream: true });
  setAnalysisResult((prev) => prev + chunk);
}
Users see the AI response appear word by word. Much better UX than staring at a loading spinner for 10 seconds.
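One subtlety worth knowing: TextDecoder with { stream: true } buffers incomplete multi-byte UTF-8 sequences across chunks, so a final decode() call is needed to flush any leftovers. Here's a hypothetical, framework-free helper capturing the same loop (names are illustrative, not the repo's exact API):

```typescript
// Reads a streaming response body chunk by chunk, decoding UTF-8
// incrementally and invoking onChunk for each decoded piece of text.
async function readTextStream(
  body: ReadableStream<Uint8Array>,
  onChunk: (text: string) => void
): Promise<void> {
  const reader = body.getReader();
  const decoder = new TextDecoder();
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    // stream: true keeps partial multi-byte characters buffered
    onChunk(decoder.decode(value, { stream: true }));
  }
  // Flush anything still buffered at end of stream
  const tail = decoder.decode();
  if (tail) onChunk(tail);
}
```

In the component you'd call it as readTextStream(response.body!, (t) => setAnalysisResult((prev) => prev + t)).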
Chapter 4: The Little Bugs That Bite
Race Condition in File Upload
A code review caught a subtle bug - users could start multiple uploads simultaneously:
// ❌ Race condition waiting to happen
const handleDrop = (e: React.DragEvent) => {
if (disabled) return;
processFile(file);
};
If isProcessing is true, we should block new uploads:
// ✅ Guard against concurrent uploads
const processFile = async (file: File) => {
if (disabled || isProcessing) return;
// ...
};
const handleDrop = (e: React.DragEvent) => {
if (disabled || isProcessing) return;
// ...
};
Apply the same guard to drag handlers, file input, and the Card's disabled styling. Defensive programming saves debugging time later.
Browser Tab Freezing on Large PDFs
Another review comment - what happens if someone uploads a 500-page PDF?
// ❌ Will freeze the browser
for (let i = 1; i <= pageCount; i++) {
const page = await pdfDoc.getPage(i);
// Process every single page...
}
The fix - fail fast with limits:
// ✅ Protect the browser
const MAX_PAGES = 50;
const MAX_TEXT_LENGTH = 100_000;
if (pageCount > MAX_PAGES) {
throw new Error(
`PDF has ${pageCount} pages. Maximum allowed is ${MAX_PAGES} pages.`
);
}
// Also check text length incrementally
totalLength += pageText.length;
if (totalLength > MAX_TEXT_LENGTH) {
throw new Error(
"PDF content exceeds maximum allowed length. Please use a shorter resume."
);
}
String Concatenation Performance
One more optimization - repeated string concatenation is O(n²):
// ❌ Quadratic time complexity
let allText = "";
for (let i = 1; i <= pageCount; i++) {
allText += pageText + "\n\n";
}
Array + join is O(n):
// ✅ Linear time complexity
const pageTexts: string[] = [];
for (let i = 1; i <= pageCount; i++) {
pageTexts.push(pageText.trim());
}
return pageTexts.join("\n\n");
For a 2-page resume, it doesn't matter. For a 50-page document, it does.
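The length cap and the array-join pattern combine naturally into one assembly step. Here's a testable sketch of that step in isolation (the function name and signature are illustrative, not the repo's exact API):

```typescript
const MAX_TEXT_LENGTH = 100_000;

// Trims each page's text, enforces a running total-length cap,
// and joins all pages once at the end (O(n) instead of O(n²)).
function assemblePageTexts(
  pageTexts: string[],
  maxLength: number = MAX_TEXT_LENGTH
): string {
  const parts: string[] = [];
  let totalLength = 0;
  for (const pageText of pageTexts) {
    const trimmed = pageText.trim();
    totalLength += trimmed.length;
    if (totalLength > maxLength) {
      throw new Error(
        "PDF content exceeds maximum allowed length. Please use a shorter resume."
      );
    }
    parts.push(trimmed);
  }
  return parts.join("\n\n");
}
```

Checking the cap inside the loop means a huge document fails fast, before all 50 pages have been extracted.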
The Final Architecture
app/
├── api/
│ └── analyze-resume/
│ └── route.ts # Streaming API endpoint
├── dashboard/
│ └── page.tsx # Server component with auth check
components/
├── dashboard/
│ ├── resume-analyzer-form.tsx # Main form (client)
│ └── dashboard-header.tsx # Header with logout
├── resume-upload.tsx # PDF/text upload with drag-drop
├── job-description-input.tsx # Job description textarea
└── analysis-results.tsx # Streaming markdown renderer
lib/
├── ai/
│ ├── index.ts # Barrel export
│ └── resume-analyzer.ts # AI streaming logic
├── pdf-parser.ts # PDF text extraction
└── validations/
└── resume-analysis.ts # Shared Zod schema + constants
TL;DR - What I Learned
If you scrolled straight here (no judgment), here's the cheat sheet:
| Problem | Solution |
|---|---|
| PDF.js worker fails in Next.js | Use legacy build + import.meta.url for worker path |
| node_modules path in production | Remove it or copy files to public/ |
| Zod v4 required_error doesn't exist | Use error property instead |
| Zod v4 errors array | Use issues array instead |
| Validation limits duplicated | Constants as single source of truth |
| Race conditions in file upload | Guard with isProcessing flag everywhere |
| Large PDFs freeze browser | Add max page and text length limits |
| String concatenation is slow | Use array + join instead |
Why Open Source?
Every bug I hit, every fix I found - it's all in the repo. Not because I'm proud of the bugs, but because someone else will hit the same issues.
If this post saves you even one hour of Googling "pdfjs-dist worker not working next.js", it was worth writing.
The repo: github.com/ImAbdullahJan/carific.ai
⭐ If you find this useful, consider starring the repo - it helps others discover the project!
What's Next
The resume analyzer is live, but there's more to build:
- 🔐 OAuth providers - Google and GitHub sign-in
- 💾 Save analysis history - Review past analyses
- 📊 Resume scoring trends - Track improvement over time
- 🎯 ATS optimization - Specific keyword recommendations
I'll be documenting each feature as I build it. Follow along if you want to see how an open-source project evolves.
Your Turn
I'd love feedback:
- On the code: See something that could be better? Open an issue or PR.
- On the post: Too long? Missing something? Tell me.
- On AI features: What would you want in a resume analyzer?
Building in public only works if there's a public to build with.
If this post helped you, drop a ❤️. It means more than you know.
Let's connect:
- 🐙 GitHub: @ImAbdullahJan - ⭐ Star the repo if you found this helpful!
- 🐦 Twitter/X: @abdullahjan - Follow for more dev content
- 👥 LinkedIn: abdullahjan - Let's connect
Second post of many. See you in the next one.