DEV Community

Artem Kirkor

DNPR (DOM-Native PDF Rendering). A Different Way to Think About PDFs

In 2004, Apple introduced the <canvas> element in WebKit. It was a clever hack - draw pixels directly in the browser. Nobody designed it for PDF rendering. Yet here we are in 2026, and every major browser-based PDF editor still rasterizes documents to canvas. pdf.js, PDFium, all of them.

The question nobody seems to ask: what if the document was an object, not an image?

The problem with Canvas

Open DevTools on any PDF editor. You'll see a <canvas> element - the entire document rendered as pixels. Below it, a separate floating text layer used for selection and search. Two completely disconnected systems pretending to be one unified document.

Remove the <canvas>. What's left are ghost text placeholders hanging in the air with no connection to anything structural. That's not a bug. That's the architecture.

This means:

  • every edit requires selecting a specific tool first
  • graphics can't be manipulated natively - they're pixels inside pixels
  • accessibility is always bolted on, never built in
  • AI gets a flat image to guess at, not a structured tree to work with

How DNPR works differently

Instead of rasterizing to canvas, DNPR parses the raw PDF 2.0 binary structure directly. Each element is mapped to a corresponding native DOM node - preserving structure, position and content without rasterization.

Text becomes contentEditable spans. Graphics become SVG paths. Images become DOM elements. Every PDF element is a live, individually addressable DOM node - supporting deletion, replacement, resizing and repositioning through standard browser APIs.
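
To make the mapping concrete, here is a minimal sketch of the idea under a simplified element model. All names and shapes below are hypothetical illustrations, not DNPR's actual API; in the browser, the descriptors would feed `document.createElement` and friends.

```typescript
// Hypothetical sketch: each parsed PDF element becomes a descriptor
// for a native DOM node (illustrative shapes, not DNPR's real API).

type PdfElement =
  | { kind: "text"; content: string; x: number; y: number }
  | { kind: "path"; d: string }
  | { kind: "image"; href: string; x: number; y: number };

interface DomDescriptor {
  tag: string;                   // element to create: span, path, img
  attrs: Record<string, string>; // attributes, incl. absolute position
  editable: boolean;             // contentEditable applies to text only
}

function toDomNode(el: PdfElement): DomDescriptor {
  switch (el.kind) {
    case "text":
      // text -> contentEditable <span>, absolutely positioned
      return {
        tag: "span",
        attrs: { style: `left:${el.x}px;top:${el.y}px` },
        editable: true,
      };
    case "path":
      // vector graphics -> SVG <path>
      return { tag: "path", attrs: { d: el.d }, editable: false };
    case "image":
      // images -> <img>, addressable like any other DOM node
      return {
        tag: "img",
        attrs: { src: el.href, style: `left:${el.x}px;top:${el.y}px` },
        editable: false,
      };
  }
}
```

The key property is that every returned node is individually addressable, so standard DOM operations (remove, replace, restyle) work on PDF content directly.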

Built on PDF 2.0 (ISO 32000-2:2020) - implementing all 73 content stream operators. Native support for 256-bit AES encryption, CAdES digital signatures, UTF-8 text encoding, Document Security Store and tagged PDF namespaces.

The approach is formally designated DOM-Native PDF Rendering (DNPR), subject of U.S. Provisional Patent Application #63/997,982, filed March 6, 2026.

Why it matters

  • Editing speed - no tool switching before every action. Click and edit directly, like a web page. In a benchmark of 50 editing steps on the same document, DNPR completed the task with significantly fewer interactions
  • Graphics control - images and vectors are real DOM objects, not pixels inside pixels
  • AI at core level - AI gets a structured object tree, not a flat image. An entire color scheme can be changed via a single API call in ~200ms, versus days of manual work via canvas
  • Privacy by default - DNPR is serverless and runs entirely on the client side. The browser is one of many runtimes. Files never leave the machine
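
The "single API call" point is easiest to see with a sketch. The node shape and `recolor` function below are hypothetical illustrations, not DNPR's actual API; in the browser these would be live DOM/SVG nodes with style properties. The point is that a global restyle is one recursive walk over an object tree, which is impossible when the document is a bitmap:

```typescript
// Hypothetical sketch: with the document as an object tree, a global
// restyle is one tree walk (shapes illustrative, not DNPR's real API).

interface DocNode {
  fill?: string;        // current fill color, if this node paints anything
  children: DocNode[];  // nested PDF elements mapped to child nodes
}

// Replace every fill color found in `palette` throughout the tree.
function recolor(node: DocNode, palette: Record<string, string>): void {
  if (node.fill && palette[node.fill]) node.fill = palette[node.fill];
  for (const child of node.children) recolor(child, palette);
}

// Usage: swap black for white everywhere in one call.
const doc: DocNode = {
  children: [
    { fill: "#000000", children: [] },
    { fill: "#cc0000", children: [{ fill: "#000000", children: [] }] },
  ],
};
recolor(doc, { "#000000": "#ffffff" });
```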

The core difference

Canvas  →  document as an image  →  no control
DNPR    →  document as an object →  full control

Where this goes

Canvas made sense when PDF was designed for print and the web was read-only. In 2026 documents are workflows - they get edited, signed, automated, fed into AI pipelines. A rendering architecture that treats every document as a flat image is a fundamental mismatch with how documents are actually used today.

DNPR is an attempt to fix that at the architecture level - not with a better tool layer on top of canvas, but by replacing the foundation entirely.

I spent 16 months building this and filed a US provisional patent application for the approach (patent pending, App. #63/997,982).

Demo: pdfox.cloud/v2/demo
