Gabriel Anhaia
Vision Models for OCR: When They Beat Tesseract and When They Don't


A finance team at a mid-sized SaaS feeds 40,000 expense receipts a month
through Tesseract. Most are German supermarket prints with thermal-paper
fade. The accuracy floor on those receipts hovers around 60 percent
character-correct. The team's pragmatic answer in 2024 was a manual
review queue. The 2026 answer is a vision model wired in as a fallback
on the pages Tesseract is not confident about.

That second answer is cheaper than the first, more accurate than running
Tesseract alone, and a fraction of the cost of routing every page
through a VLM. The trick is the routing.

What each tool is actually good at

Tesseract has been the open-source default for over a decade. The 5.x
line uses an LSTM engine, runs on CPU, ships in every Linux distro, and
costs you nothing per page. On clean digital-born text and clean scans
it performs strongly — low single-digit character error rates are
common in published comparisons, and on the easiest pages it gets very
close to perfect. That accuracy falls apart on anything Tesseract was
not designed for: rotated photos, low contrast, handwriting, dense
table layouts, multi-column magazines, receipts printed on faded
thermal paper.

VLMs sit at the other end. Claude Sonnet 4.5, GPT-5, and Gemini 2.5 Pro
accept an image and emit text. They reason about layout. They guess
sensibly when a character is smudged. Across the OCR comparisons
published in 2026, frontier VLMs outperform Tesseract by a wide margin
on handwriting and bad-quality photos, and they outperform it on
heterogeneous document workloads where layout reasoning matters. On
clean printed text the gap shrinks and Tesseract is still competitive
with frontier VLMs.

The headline VLM accuracy numbers in vendor comparisons are usually
end-to-end extraction scores, not raw OCR. They include the model's
ability to find fields and place them into a schema, which is half of
the value on real document workloads. It is also why the same VLM
looks weaker on a pure-transcription benchmark than on an
invoice-extraction one.

The cost gap is not subtle

Tesseract per page: zero dollars in API fees, a few milliseconds of
CPU. On a $40-a-month server you can grind through millions of pages a
day. You pay the engineer who tunes preprocessing and the disk that
holds the binaries.

Claude Sonnet 4.5 per page: applying the tokens ≈ width × height ÷ 750
formula, a 1500×1000 image works out to about 2,000 input tokens. At
the published $3 per million input tokens, that is near $0.006 per
image. Output adds maybe $0.005 if you ask for structured JSON of a
typical receipt. Round to about one cent per page. Gemini 2.5 Pro and
GPT-5 land in the same ballpark, going by their own pricing pages.
These vendor numbers shift, so rerun the math against the model card
you actually deploy.
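
The arithmetic is worth keeping as code rather than in a doc, because
the constants rot. A minimal sketch of the per-page estimate; the
input rate is the published figure quoted above, and the output rate
is an assumption chosen to be consistent with the ~$0.005 output
figure, so re-check both against the current pricing page:

# Back-of-envelope VLM cost per page using tokens ~= width * height / 750.
# Rates are placeholders; re-check them against the vendor pricing page.
INPUT_USD_PER_MTOK = 3.00    # published Sonnet 4.5 input rate
OUTPUT_USD_PER_MTOK = 15.00  # assumed output rate, consistent with ~$0.005/page


def vlm_page_cost(width: int, height: int, output_tokens: int = 350) -> float:
    input_tokens = width * height / 750
    return (
        input_tokens * INPUT_USD_PER_MTOK
        + output_tokens * OUTPUT_USD_PER_MTOK
    ) / 1_000_000


print(f"${vlm_page_cost(1500, 1000):.4f}")  # ~$0.0113, about a cent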

A million pages a month through a VLM is roughly $10,000. Through
Tesseract it is the cost of a CPU box. The hybrid pipeline lets you
keep the Tesseract economics for the 70 to 80 percent of pages where
Tesseract is fine, and only pay the VLM rate on the slice that needs
it.

The hybrid pipeline

The orchestrator is short. Run Tesseract first, ask it for per-word
confidence, fall back to a VLM only when confidence drops below a
threshold or the page returns suspiciously little text.

import base64
import re
from pathlib import Path

import pytesseract
from anthropic import Anthropic
from PIL import Image

MODEL = "claude-sonnet-4-5"
TESSERACT_LANG = "eng+deu"
CONFIDENCE_FLOOR = 70
MIN_CHARS = 40

client = Anthropic()


# First pass: Tesseract, with per-word confidences averaged into a
# page score. Rows with conf < 0 are layout artifacts, not words.
def tesseract_pass(path: Path) -> tuple[str, float]:
    img = Image.open(path)
    data = pytesseract.image_to_data(
        img,
        lang=TESSERACT_LANG,
        output_type=pytesseract.Output.DICT,
    )
    words = []
    confidences = []
    for word, conf in zip(data["text"], data["conf"]):
        if not word.strip():
            continue
        try:
            c = float(conf)
        except ValueError:
            continue
        if c < 0:
            continue
        words.append(word)
        confidences.append(c)
    text = " ".join(words)
    avg_conf = (
        sum(confidences) / len(confidences)
        if confidences
        else 0.0
    )
    return text, avg_conf


# Fallback pass: send the page image to the VLM with a
# transcription-only prompt.
def vlm_pass(path: Path) -> str:
    img_bytes = path.read_bytes()
    b64 = base64.standard_b64encode(img_bytes).decode()
    media_type = (
        "image/png"
        if path.suffix.lower() == ".png"
        else "image/jpeg"
    )
    resp = client.messages.create(
        model=MODEL,
        max_tokens=1500,
        messages=[
            {
                "role": "user",
                "content": [
                    {
                        "type": "image",
                        "source": {
                            "type": "base64",
                            "media_type": media_type,
                            "data": b64,
                        },
                    },
                    {
                        "type": "text",
                        "text": (
                            "Transcribe every legible "
                            "character in this image. "
                            "Preserve line breaks. "
                            "Return only the text."
                        ),
                    },
                ],
            }
        ],
    )
    return resp.content[0].text


# Routing rule: fall back when the page average is below the floor
# or Tesseract returned suspiciously little text.
def needs_fallback(text: str, conf: float) -> bool:
    if conf < CONFIDENCE_FLOOR:
        return True
    if len(re.sub(r"\s", "", text)) < MIN_CHARS:
        return True
    return False


# Orchestrator: Tesseract first, VLM only when the routing rule fires.
def ocr(path: Path) -> dict:
    text, conf = tesseract_pass(path)
    if not needs_fallback(text, conf):
        return {
            "text": text,
            "engine": "tesseract",
            "confidence": conf,
        }
    fallback = vlm_pass(path)
    return {
        "text": fallback,
        "engine": "vlm",
        "confidence": None,
        "tesseract_conf": conf,
    }

A hundred-odd lines, two engines, one routing rule. The interesting
choices are in the thresholds, not the code.

Picking the threshold

CONFIDENCE_FLOOR = 70 is not a magic number. Tesseract reports
confidence per word on a 0 to 100 scale. On clean print the average
sits in the 90s. On a faded thermal receipt it crashes into the 40s.
Anywhere in between is the band where Tesseract is sometimes right and
sometimes confidently wrong.

Pick the floor by sampling a few hundred pages from your real workload,
running Tesseract on each, and plotting the average confidence against
human-judged correctness. The floor lands at the inflection point on
that plot. For most teams it sits between 60 and 75. Anything stricter
sends too many easy pages to the VLM and bleeds budget. Anything
looser keeps garbage from Tesseract in the output.
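
A minimal sketch of that sampling exercise, assuming you already have
a few hundred pages with a human correct/incorrect label; the
labeled_pages name and its shape are hypothetical:

# Hypothetical calibration sketch: bucket sampled pages by Tesseract's
# average confidence and print human-judged accuracy per band.
# The floor goes where the accuracy curve bends.
from collections import defaultdict


def confidence_bands(labeled_pages, band_width=10):
    # labeled_pages: iterable of (avg_conf, human_judged_correct) pairs
    hits = defaultdict(int)
    totals = defaultdict(int)
    for conf, correct in labeled_pages:
        band = int(conf // band_width) * band_width
        totals[band] += 1
        hits[band] += int(correct)
    for band in sorted(totals):
        rate = hits[band] / totals[band]
        print(f"conf {band:3d}-{band + band_width - 1:3d}: "
              f"{rate:4.0%} correct over {totals[band]} pages")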

The second condition, MIN_CHARS = 40, catches a separate failure
mode: Tesseract sometimes reports a page confidence of 90 because the
four letters it found were each crisp, even though it missed 95
percent of the page. A character-count floor traps that case before
it reaches production.

Where the hybrid breaks

Three patterns where this pipeline still misbehaves.

Mixed-quality pages. A page that is half clean text and half a
low-quality photo of a receipt taped to it. Tesseract handles the
printed half well, posts a healthy average confidence, and you never
reach the fallback even though the receipt half is unreadable. Fix
this by running Tesseract on detected layout regions, not the whole
page.
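
Tesseract's own output is enough to approximate that without a
separate layout model: image_to_data tags every word with a block
number, so you can average confidence per block instead of per page.
A sketch, reusing Path and TESSERACT_LANG from the pipeline above:

from collections import defaultdict

import pytesseract
from PIL import Image


def weakest_block_conf(path: Path) -> float:
    # Average per-word confidence within each Tesseract layout block,
    # then return the weakest block so one clean column cannot mask
    # an unreadable region on the same page.
    data = pytesseract.image_to_data(
        Image.open(path),
        lang=TESSERACT_LANG,
        output_type=pytesseract.Output.DICT,
    )
    sums = defaultdict(float)
    counts = defaultdict(int)
    for block, word, conf in zip(
        data["block_num"], data["text"], data["conf"]
    ):
        c = float(conf)
        if word.strip() and c >= 0:
            sums[block] += c
            counts[block] += 1
    if not counts:
        return 0.0
    return min(sums[b] / counts[b] for b in counts)

Swapping weakest_block_conf into needs_fallback makes the router
sensitive to the worst region on the page rather than the average.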

Confidently wrong Tesseract. Faded thermal prints can produce
sequences of 80-confidence words that read "OOO OOO" when the actual
line was "100,00". Confidence and correctness are not the same thing.
The mitigation is sampling: route a small random percentage of
high-confidence Tesseract pages through the VLM as a shadow check,
log disagreements, and re-tune the threshold weekly.
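
As a sketch, the shadow check can wrap the ocr function from the
pipeline above; log_disagreement is a hypothetical stand-in for
whatever logging sink you use:

import random
import re

SHADOW_RATE = 0.02  # audit ~2% of pages Tesseract handled alone


def _normalize(text: str) -> str:
    # Collapse whitespace so layout differences between the two
    # engines do not count as disagreements.
    return re.sub(r"\s+", " ", text).strip()


def ocr_with_shadow_check(path: Path) -> dict:
    result = ocr(path)
    if result["engine"] == "tesseract" and random.random() < SHADOW_RATE:
        vlm_text = vlm_pass(path)
        if _normalize(vlm_text) != _normalize(result["text"]):
            # log_disagreement is a hypothetical sink; these logs
            # feed the weekly threshold review.
            log_disagreement(path, result["text"], vlm_text)
    return result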

Multilingual mixed scripts. Tesseract handles one language at a
time well, two languages decently, three or more poorly. Set
TESSERACT_LANG to the dominant pair and let the VLM catch the rest
when confidence dips. Do not try to load every language pack at once;
Tesseract's accuracy degrades when the model has too many candidates.

What the math looks like at scale

A workload of one million pages a month, of which 75 percent route
through Tesseract and 25 percent need the VLM fallback, costs around
$2,500 a month in API fees plus the Tesseract host. Pure VLM on the
same workload runs about $10,000. Pure Tesseract runs about $40 in
compute, but you accept the lower accuracy and the manual review queue
that comes with it.
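
The same numbers as a re-runnable calculation, using the one-cent
estimate from earlier. The routing share is the variable to watch,
since your real workload sets it:

PAGES_PER_MONTH = 1_000_000
VLM_SHARE = 0.25             # fraction of pages that hit the fallback
VLM_USD_PER_PAGE = 0.01      # the ~one-cent estimate from earlier
TESSERACT_HOST_USD = 40.0    # the CPU box

hybrid = PAGES_PER_MONTH * VLM_SHARE * VLM_USD_PER_PAGE + TESSERACT_HOST_USD
pure_vlm = PAGES_PER_MONTH * VLM_USD_PER_PAGE
print(f"hybrid ${hybrid:,.0f}/mo vs pure VLM ${pure_vlm:,.0f}/mo")
# hybrid $2,540/mo vs pure VLM $10,000/mo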

The hybrid wins when the workload mix is heterogeneous: a long tail of
hard pages that Tesseract gives up on, surrounded by a fat middle of
ordinary scans. That describes most production OCR pipelines outside
of pure-archive digitisation. If your workload is 100 percent clean
scanned text, stay with Tesseract. If it is 100 percent receipts and
handwriting, go straight to a VLM. Anywhere in between, route.

When to revisit the decision

Two things change the math. The first is VLM pricing. Frontier model
prices have trended down sharply over the last two years. The
threshold at which routing makes sense moves with them. Re-run the
cost model when a major price drop lands.

The second is specialised OCR models. Open-source vision-OCR-specific
models like PaddleOCR-VL beat frontier general LLMs on document OCR
benchmarks while costing fractions of a cent per page when
self-hosted. They are the
third tier of the routing decision, and on workloads above 10 million
pages a month the self-host economics become hard to argue with. For
most teams under that scale, Tesseract plus a Claude or Gemini
fallback is the cheapest configuration that hits the accuracy bar.

If this was useful

The prompt that drives the VLM fallback is the knob that matters most
in this pipeline. A bad prompt costs you accuracy on the exact pages
where the VLM is supposed to win. The Prompt Engineering Pocket Guide
covers structured-output prompting, multi-turn refinement on hard
images, and the patterns for asking a vision model to transcribe
without summarising or paraphrasing. If you ship a hybrid OCR pipeline
this quarter, that chapter is the one that pays for the book.

Prompt Engineering Pocket Guide: Techniques for Getting the Most from LLMs
