Sarah Guthals, PhD for Tensorlake

Posted on • Originally published at tensorlake.ai

New: Vision Language Models for Document Processing

We've expanded our use of Vision Language Models (VLMs) across multiple DocumentAI features for faster, more accurate processing of documents with hundreds of pages:

  • Page Classification: Identify relevant pages in large documents
  • Figure and Table Summarization: Extract insights from visual elements
  • Structured Extraction (with skip_ocr): Direct visual understanding for more accurate extraction on harder-to-parse documents (e.g., scanned documents, engineering diagrams, or documents with complex reading order)

This post demonstrates our enhanced page classification capabilities. With VLM support, you can quickly process large documents by identifying and extracting from only the relevant pages.

Try it in this Colab Notebook.

Key Improvements

Scale & Performance

  • Handle Large Documents: Classify documents with hundreds of pages without performance degradation
  • VLM-Powered Classification: Replaced OCR with Vision Language Models for faster, more accurate classification
  • Selective Processing: Only parse pages that matter, reducing processing time and costs

Recommended Workflow

  1. Classify First: Use the classify endpoint to identify relevant pages based on your criteria
  2. Parse Selectively: Set page_range to only process the classified relevant pages
  3. Extract Efficiently: Apply structured extraction only to pages containing the information you need

Use Case Example: SEC Filings Analysis

This approach is particularly powerful for extracting specific information from lengthy documents like SEC filings. For example, when analyzing cryptocurrency holdings across multiple companies' 10-K and 10-Q reports:

  • Challenge: Each filing can be 100-200+ pages, but crypto-related information might only appear on 10-20 pages
  • Solution: First classify pages containing "digital assets holdings", then extract structured data only from those pages
  • Result: 80-90% reduction in processing time and more focused, accurate extractions
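The reduction figure above follows directly from the page counts in the example; a quick back-of-the-envelope sketch (the specific numbers are illustrative, taken from the ranges mentioned):

```python
# Illustrative numbers from the SEC-filing example above
total_pages = 150      # a typical 10-K can run 100-200+ pages
relevant_pages = 15    # crypto-related content on ~10-20 pages

# Fraction of pages skipped when parsing only classified pages
reduction = 1 - relevant_pages / total_pages
print(f"{reduction:.0%} of pages skipped")  # 90% of pages skipped
```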

Code Example

from tensorlake.documentai import DocumentAI, PageClassConfig

# Placeholder URL; point this at the filing you want to process
filing_url = "https://example.com/10-K.pdf"

doc_ai = DocumentAI()

# Step 1: Classify pages
page_classifications = [
    PageClassConfig(
        name="digital_assets_holdings",
        description="Pages showing cryptocurrency holdings on balance sheet..."
    )
]

parse_id = doc_ai.classify(
    file_url=filing_url,
    page_classifications=page_classifications
)

result = doc_ai.wait_for_completion(parse_id=parse_id)

# Step 2: Parse only relevant pages
relevant_pages = result.page_classes[0].page_numbers
page_range = ",".join(str(i) for i in relevant_pages)

final_result = doc_ai.parse_and_wait(
    file=filing_url,
    page_range=page_range,
    structured_extraction_options=[...]
)
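Note that page_range in the snippet above is just a comma-separated string of page numbers. Here is a minimal, standalone illustration of that joining step (the page numbers are made up for demonstration):

```python
# Hypothetical page numbers returned by classification
page_numbers = [12, 13, 47, 48, 52]

# Build the comma-separated page_range string used above
page_range = ",".join(str(n) for n in page_numbers)
print(page_range)  # 12,13,47,48,52
```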

Benefits

  • Cost Efficiency: Process only what you need
  • Speed: Reduce processing time by focusing on relevant content
  • Accuracy: VLM classification provides better understanding of page content
  • Scalability: Handle large document sets without compromising performance

Try It Out

Check out our example notebook demonstrating how to extract cryptocurrency metrics from SEC filings using the new classification approach.

Getting Started

Update to the latest version of Tensorlake:
pip install --upgrade tensorlake

Then start classifying, summarizing, and extracting with improved efficiency!
