DEV Community

Pranav Mailarpawar
How I Built a PDF to JPG Converter That Renders at 600 DPI Inside a Browser Tab

The complete engineering story behind high-resolution PDF-to-image conversion using PDF.js, canvas memory management, and device-adaptive processing

Most online PDF to JPG converters cap output at 150 DPI. Some go to 300 DPI if you pay. Very few reach 600 DPI, and those that do require uploading your file to their servers.
The PDF to JPG converter inside ihatepdf.cv supports up to 600 DPI output — completely free, with zero server upload. Every pixel is rendered locally in your browser. Here's exactly how it works, why 600 DPI matters, and what engineering problems had to be solved to make it work on devices ranging from an iPhone to a 32GB workstation.

Why DPI matters for PDF to image conversion
DPI stands for dots per inch — it describes how many pixels represent each inch of the original document.
72 DPI is the browser's base resolution. One CSS pixel equals one device pixel at 1× zoom. This is what you get from rendering at scale 1 and calling canvas.toBlob() with no scaling applied. Fine for a thumbnail. Terrible for anything else.
150 DPI is adequate for screen viewing and social media. Text is sharp enough to read. Images look acceptable. File sizes are reasonable.
300 DPI is the standard for professional printing. Business cards, brochures, and office documents are typically printed at 300 DPI. This is what most professional tools default to.
600 DPI is for archival purposes, large-format printing, and situations where you need to zoom into the output image and still see crisp detail — scanning workflows, medical records, engineering drawings, high-resolution reproductions.
The way ihatepdf.cv achieves these DPI targets is by treating DPI as a scale multiplier from the browser's 72 DPI base:
```javascript
const dpiToScale = (dpi) => dpi / 72;

// 150 DPI → 2.08× scale
// 300 DPI → 4.17× scale
// 600 DPI → 8.33× scale
```
That 8.33× scale at 600 DPI is where the engineering gets interesting.

The canvas size problem
Browsers impose a hard limit on canvas dimensions: 16,384 pixels on most modern browsers (Chrome, Firefox, Safari). At 8.33× scale, a standard A4 PDF page (595 × 842 points at 72 DPI) becomes:
595 × 8.33 = 4,956 px wide
842 × 8.33 = 7,014 px tall
That's within the 16,384 limit for a standard page. But a legal-size document, a wide-format architectural drawing, or a landscape slide deck at 600 DPI can easily exceed it.
The solution is getOptimalScale():
```javascript
const getOptimalScale = (viewport, requestedScale) => {
  const maxDimension = 16384;
  const testWidth = viewport.width * requestedScale;
  const testHeight = viewport.height * requestedScale;

  if (testWidth > maxDimension || testHeight > maxDimension) {
    const scaleFactor = Math.min(
      maxDimension / viewport.width,
      maxDimension / viewport.height
    );
    return scaleFactor * 0.95; // 5% safety margin
  }
  return requestedScale;
};
```
Before rendering any page, the tool calculates whether the requested scale would exceed the canvas limit. If it would, it automatically reduces the scale to the maximum safe value for that specific page's dimensions. The 5% safety margin accounts for browsers that enforce 16,383 rather than 16,384.
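To make the clamping concrete, here is the function applied to an oversized page. The A1 sheet dimensions below are an illustrative assumption on my part, not something the tool hard-codes:

```javascript
// Sketch: a landscape A1 architectural sheet (~2384 × 1684 pt at 72 DPI,
// an assumed example) requested at 600 DPI gets clamped automatically.
const getOptimalScale = (viewport, requestedScale) => {
  const maxDimension = 16384;
  if (viewport.width * requestedScale > maxDimension ||
      viewport.height * requestedScale > maxDimension) {
    return Math.min(maxDimension / viewport.width,
                    maxDimension / viewport.height) * 0.95;
  }
  return requestedScale;
};

const a1Landscape = { width: 2384, height: 1684 }; // PDF points, approximate
const requested = 600 / 72;                        // 8.33× for 600 DPI

const safe = getOptimalScale(a1Landscape, requested);
console.log(safe.toFixed(2));                      // clamped well below 8.33
console.log(Math.round(a1Landscape.width * safe)); // stays under 16,384 px
```

At 8.33× the width alone would be nearly 20,000 px, so the page renders at roughly 6.5× instead, the largest scale that keeps both dimensions inside the canvas limit.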

Device pixel ratio — the hidden multiplier
Modern screens have device pixel ratios above 1×. A MacBook Pro Retina display is 2×. Some Android phones are 3×. ihatepdf.cv accounts for this:
```javascript
const renderPageToCanvas = async (page, targetScale) => {
  const viewport = page.getViewport({ scale: targetScale });
  const pixelRatio = Math.min(window.devicePixelRatio || 1, 2); // cap at 2×

  const canvas = document.createElement('canvas');
  canvas.width = Math.floor(viewport.width * pixelRatio);
  canvas.height = Math.floor(viewport.height * pixelRatio);

  const ctx = canvas.getContext('2d', {
    alpha: false, // white background — JPEG has no alpha channel
    willReadFrequently: false
  });

  ctx.fillStyle = 'white';
  ctx.fillRect(0, 0, canvas.width, canvas.height);
  ctx.scale(pixelRatio, pixelRatio);
  ctx.imageSmoothingEnabled = true;
  ctx.imageSmoothingQuality = 'high';

  await page.render({
    canvasContext: ctx,
    viewport,
    intent: 'print', // not 'display' — higher-quality rendering
    enableWebGL: false,
    renderInteractiveForms: false,
  }).promise;

  return canvas;
};
```
Three things worth noting here:
alpha: false — PDFs have no transparent background. Setting alpha to false avoids the browser creating an alpha channel it never needs, saving memory.
intent: 'print' — PDF.js has two rendering intents: display and print. Print intent uses higher-quality glyph rendering and anti-aliasing, which produces noticeably sharper text, especially at high DPI.
pixelRatio capped at 2× — Going to 3× on a high-DPI phone would triple memory usage for a visual improvement that's imperceptible at normal viewing sizes. The cap prevents memory exhaustion on mobile.

Memory management — the real challenge
This is what separates a tool that actually works from one that crashes your browser tab.
At 600 DPI, each A4 page canvas uses approximately:
4,956 × 7,014 px × 4 bytes (RGBA) = ~139 MB of RAM
Plus an equivalent amount of GPU texture memory for the canvas. Plus the PDF.js rendering buffers. For a 50-page document at 600 DPI, the naive approach allocates ~7 GB — which immediately crashes any browser.
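The ~139 MB figure is plain arithmetic on the canvas dimensions. As a quick sanity check:

```javascript
// Worked check: raw RGBA buffer for one A4 page at 600 DPI (8.33× scale)
const width = 4956;       // 595 pt × 8.33
const height = 7014;      // 842 pt × 8.33
const bytesPerPixel = 4;  // RGBA

const bytesPerPage = width * height * bytesPerPixel;
const mbPerPage = bytesPerPage / 1e6;   // decimal megabytes
console.log(Math.round(mbPerPage));     // → 139

const gbFor50 = (bytesPerPage * 50) / 1e9;
console.log(gbFor50.toFixed(1));        // → 7.0 (before GPU copies and buffers)
```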
The solution is explicit canvas disposal after each page:
```javascript
const canvas = await renderPageToCanvas(page, optimizedScale);

// ... create blob, trigger download ...

canvas.width = 0;  // ← releases GPU texture memory immediately
canvas.height = 0;
// canvas goes out of scope → GC collects RAM
```
Setting canvas dimensions to zero is not obvious. Simply removing the canvas reference doesn't immediately release GPU memory in most browsers — the GPU texture allocation persists until the browser's garbage collector runs, which can be seconds later. Setting width and height to zero forces immediate GPU memory deallocation.
Between batches, the tool adds a deliberate 2-second pause:
```javascript
if (batchIndex < batches.length - 1) {
  await new Promise(resolve => setTimeout(resolve, 2000));
  if (window.gc) window.gc(); // hint — browser may ignore
}
```
Chrome's garbage collector typically triggers after ~1–1.5 seconds of idle time. The 2-second pause gives it time to run and reclaim memory before the next batch begins.
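The batching loop around these pieces can be sketched as a generic helper. This is a simplified sketch, not the tool's actual code: `processInBatches` and its worker callback are names I made up, and the `window.gc` hint is omitted so the snippet also runs outside a browser:

```javascript
// Sketch: process items in batches with a GC-friendly pause between batches.
// `worker` is called once per item (e.g. render page, save blob, dispose
// canvas); the pause gives the browser idle time to reclaim canvas memory
// from the previous batch before the next one allocates.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

const processInBatches = async (items, batchSize, worker, pauseMs = 2000) => {
  const results = [];
  for (let start = 0; start < items.length; start += batchSize) {
    const batch = items.slice(start, start + batchSize);
    for (const item of batch) {
      results.push(await worker(item)); // pages rendered strictly one at a time
    }
    const isLastBatch = start + batchSize >= items.length;
    if (!isLastBatch) await sleep(pauseMs); // idle window for the GC to run
  }
  return results;
};
```

Wiring it up would look something like `await processInBatches(pageNumbers, caps.maxPagesPerBatch, renderAndSave)`, where `renderAndSave` is a hypothetical per-page function built from the rendering and disposal code above.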

Device-adaptive limits
The same code runs on a 2GB RAM phone and a 32GB workstation. Rather than applying one-size-fits-all limits, ihatepdf.cv detects device capabilities and adjusts automatically:
```javascript
const getDeviceCapabilities = () => {
  const isMobile = /Android|iPhone/i.test(navigator.userAgent);
  const isTablet = /(tablet|ipad)/i.test(navigator.userAgent);
  const deviceMem = navigator.deviceMemory || 4; // not available in Safari

  if (isMobile && screen.width < 768) {
    return { maxFileSize: 50 * 1024 * 1024, maxDPI: 300, maxPagesPerBatch: 10 };
  }
  if (isTablet) {
    return { maxFileSize: 75 * 1024 * 1024, maxDPI: 450, maxPagesPerBatch: 25 };
  }
  if (deviceMem < 4) {
    return { maxFileSize: 100 * 1024 * 1024, maxDPI: 450, maxPagesPerBatch: 30 };
  }
  return { maxFileSize: 150 * 1024 * 1024, maxDPI: 600, maxPagesPerBatch: 50 };
};
```
A phone user still gets PDF to JPG conversion — just capped at 300 DPI and 10 pages per batch instead of 600 DPI and 50 pages. They get the tool, scaled to what their device can handle.
Before any large conversion, memory usage is estimated:
```javascript
const estimateMemoryUsage = (fileSize, pageCount, scale, format) => {
  const baseMemoryPerPage = 5 * 1024 * 1024; // 5 MB at scale 1.0
  const scaleFactor = Math.pow(scale, 2); // quadratic: 2× scale = 4× memory
  const formatMultiplier = format === 'png' ? 1.5 : 1.0;
  const estimated = pageCount * baseMemoryPerPage * scaleFactor * formatMultiplier;
  return { estimated, withSafety: estimated * 1.5 };
};
```
Memory scales quadratically with DPI — doubling DPI quadruples memory usage because both dimensions double. A user trying to convert 50 pages at 600 DPI sees a warning before the browser runs out of memory rather than a silent crash.
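Plugging the warning case into the estimator shows why the check matters. The numbers come straight from the constants in the function above (its body is repeated here so the snippet runs standalone):

```javascript
// (estimateMemoryUsage repeated from above so the sketch is self-contained)
const estimateMemoryUsage = (fileSize, pageCount, scale, format) => {
  const baseMemoryPerPage = 5 * 1024 * 1024;
  const scaleFactor = Math.pow(scale, 2);
  const formatMultiplier = format === 'png' ? 1.5 : 1.0;
  const estimated = pageCount * baseMemoryPerPage * scaleFactor * formatMultiplier;
  return { estimated, withSafety: estimated * 1.5 };
};

// 50 pages at 600 DPI (scale 600/72 ≈ 8.33), JPEG output.
// fileSize is unused by the estimate, so null is passed here.
const { estimated } = estimateMemoryUsage(null, 50, 600 / 72, 'jpeg');
console.log((estimated / 2 ** 30).toFixed(1) + ' GiB'); // ≈ 17 GiB — warn first
```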

JPEG vs PNG — when to use each
ihatepdf.cv supports both output formats. The choice matters:
JPEG — lossy compression. Adjustable quality from 60% to 100%. 300 DPI JPEG at 85% quality is typically 500KB–2MB per page. The right choice for photos, scanned documents, presentations — anything where some quality loss is imperceptible in practice.
PNG — lossless compression. No quality setting. 300 DPI PNG is typically 3–8MB per page. The right choice for documents with sharp lines, text-heavy pages, technical diagrams, or any situation where pixel-perfect reproduction is required.
For archival purposes at 600 DPI, PNG is almost always the right answer — the file sizes are large but the quality guarantee is absolute.

The four DPI presets and what they're for
Rather than exposing a raw DPI slider and leaving users to guess, ihatepdf.cv maps to four practical use cases:
| Preset | DPI | JPEG Quality | Best for |
|---|---|---|---|
| Web | 150 | 85% | Social media, email, web embedding |
| Print | 300 | 95% | Office printing, CVs, brochures |
| Professional | 500 | 98% | High-end printing, detailed documents |
| Archival | 600 | 100% | Maximum quality, large format, archiving |
The Archival preset is only shown on devices that can handle it — it is hidden on phones and tablets where it would cause memory issues.
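One way to express that gating is a plain lookup filtered by the device cap. A sketch only: the `PRESETS` array mirrors the table above, but `availablePresets` is a name I invented, not the site's API:

```javascript
// Sketch: preset table from above, filtered by the device's DPI cap
const PRESETS = [
  { name: 'Web',          dpi: 150, jpegQuality: 0.85 },
  { name: 'Print',        dpi: 300, jpegQuality: 0.95 },
  { name: 'Professional', dpi: 500, jpegQuality: 0.98 },
  { name: 'Archival',     dpi: 600, jpegQuality: 1.0 },
];

// Only offer presets the current device can handle (e.g. maxDPI from
// getDeviceCapabilities); a phone capped at 300 DPI never sees Archival.
const availablePresets = (maxDPI) => PRESETS.filter((p) => p.dpi <= maxDPI);

console.log(availablePresets(300).map((p) => p.name)); // phones: Web and Print only
```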

Privacy — verifiable, not just claimed
The entire conversion pipeline runs locally:
```javascript
// The complete data lifecycle — no network
FileReader.readAsArrayBuffer(file)    // → browser memory
  → pdfjsLib.getDocument({ data })    // → PDF.js processing (local)
  → page.render({ canvasContext })    // → Canvas API (local)
  → canvas.toBlob('image/jpeg')       // → Blob in memory
  → URL.createObjectURL(blob)         // → local object URL
  → anchor.click()                    // → device storage
// Zero network requests for file data
```
Open DevTools → Network tab → convert a PDF to JPG. The upload column shows 0 bytes for your document. Not a policy. Not a claim. A verifiable architectural fact.

Try it
ihatepdf.cv/pdf-to-jpg
Free. No account. No upload. No watermark on output. Supports JPEG and lossless PNG up to 600 DPI — the same output quality as tools that charge $20/month, running entirely in your browser tab.
If you process high-resolution documents professionally and have questions about the implementation, or if you find edge cases I haven't handled yet, I read comments.

Part of an ongoing series on building a privacy-first PDF toolkit entirely in the browser. The full architecture overview is at ihatepdf.cv/technical-blog. The compression deep-dive is at ihatepdf.cv/compress-pdf.