Llama 3.2-Vision is a multimodal large language model available in 11B and 90B sizes, capable of processing both text and image inputs to generate text outputs. The model excels in visual recognition, image reasoning, image description, and answering image-related questions, outperforming many available open-source and closed multimodal models on common industry benchmarks.
Llama 3.2-Vision Examples
Handwriting
Optical Character Recognition (OCR)
In this article I will describe how to call the Llama 3.2-Vision 11B model served locally by Ollama and implement image text recognition (OCR) using Ollama-OCR.
Features of Ollama-OCR
🚀 High accuracy text recognition using Llama 3.2-Vision model
📝 Preserves original text formatting and structure
🖼️ Supports multiple image formats: JPG, JPEG, PNG
⚡️ Customizable recognition prompts and models
🔍 Markdown output format option
💪 Robust error handling
Installing Ollama
Before you can start using Llama 3.2-Vision, you need to install Ollama, a platform that supports running multimodal models locally. Follow the steps below to install it:
- Download Ollama: Visit the official Ollama website to download the installation package for your operating system.
- Install Ollama: Run the downloaded installer and follow the prompts to complete the installation (a quick way to check that the local server is running is sketched below).
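If you want to confirm that the Ollama server is up before pulling any models, you can query its local REST API. This is a minimal sketch that assumes the default endpoint http://localhost:11434 and Node.js 18+ (for the built-in fetch); the /api/tags route lists the models you have pulled locally.

// Quick health check against the local Ollama server (default port 11434).
async function checkOllama(): Promise<void> {
  const res = await fetch("http://localhost:11434/api/tags");
  if (!res.ok) {
    throw new Error(`Ollama responded with HTTP ${res.status}`);
  }
  // /api/tags returns { models: [{ name, size, ... }] } for locally pulled models
  const { models } = (await res.json()) as { models: { name: string }[] };
  console.log("Installed models:", models.map((m) => m.name).join(", "));
}

checkOllama().catch((err) => console.error("Is Ollama running?", err));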
Install Llama 3.2-Vision 11B
After installing Ollama, you can pull and run the Llama 3.2-Vision 11B model with the following command (the model is downloaded automatically on the first run):
ollama run llama3.2-vision
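ollama-ocr (introduced below) wraps all of this for you, but it helps to see what a raw call to the Llama 3.2-Vision service looks like. Here is a minimal sketch against Ollama's /api/generate endpoint, which accepts base64-encoded images in an images array; the prompt text and the handwriting.jpg path are my own placeholders.

import { readFile } from "node:fs/promises";

// Send one image plus a prompt to the locally running llama3.2-vision model.
async function describeImage(path: string): Promise<string> {
  const image = (await readFile(path)).toString("base64");
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3.2-vision",
      prompt: "Extract all text from this image.",
      images: [image], // base64-encoded image data
      stream: false, // return a single JSON object instead of a token stream
    }),
  });
  const data = (await res.json()) as { response: string };
  return data.response;
}

describeImage("./handwriting.jpg").then(console.log);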
How to use Ollama-OCR
npm install ollama-ocr
# or using pnpm
pnpm add ollama-ocr
OCR
import { ollamaOCR, DEFAULT_OCR_SYSTEM_PROMPT } from "ollama-ocr";

async function runOCR() {
  const text = await ollamaOCR({
    // Path to the image to recognize
    filePath: "./handwriting.jpg",
    // Built-in prompt that asks the model to return plain text
    systemPrompt: DEFAULT_OCR_SYSTEM_PROMPT,
  });
  console.log(text);
}

runOCR();
Input Image:
Output:
The Llama 3.2-Vision collection of multimodal large language models (LLMs) is a collection of instruction-tuned image reasoning generative models in 118 and 908 sizes (text + images in / text out). The Llama 3.2-Vision instruction-tuned models are optimized for visual recognition, image reasoning, captioning, and answering general questions about an image. The models outperform many of the available open source and closed multimodal models on common industry benchmarks.
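DEFAULT_OCR_SYSTEM_PROMPT is only the built-in default; as noted in the feature list, the recognition prompt is customizable. Here is a small sketch passing a custom instruction string instead (the prompt wording is my own example, not something shipped with the library):

import { ollamaOCR } from "ollama-ocr";

async function runCustomOCR() {
  const text = await ollamaOCR({
    filePath: "./handwriting.jpg",
    // Any plain string works as the system prompt; this wording is just an example.
    systemPrompt:
      "Transcribe every piece of text in the image exactly as written. " +
      "Keep the original line breaks and do not add any commentary.",
  });
  console.log(text);
}

runCustomOCR();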
Markdown Output
import { ollamaOCR, DEFAULT_MARKDOWN_SYSTEM_PROMPT } from "ollama-ocr";

async function runOCR() {
  const text = await ollamaOCR({
    filePath: "./trader-joes-receipt.jpg",
    // Built-in prompt that asks the model to return Markdown (headings, lists, tables)
    systemPrompt: DEFAULT_MARKDOWN_SYSTEM_PROMPT,
  });
  console.log(text);
}

runOCR();
Input Image:
Output:
Use MiniCPM-V 2.6 Vision Model
Make sure the minicpm-v model has been pulled with Ollama first (ollama run minicpm-v), then pass it via the model option:

import { ollamaOCR, DEFAULT_OCR_SYSTEM_PROMPT } from "ollama-ocr";

async function runOCR() {
  const text = await ollamaOCR({
    // Switch from the default Llama 3.2-Vision model to MiniCPM-V
    model: "minicpm-v",
    filePath: "./handwriting.jpg",
    systemPrompt: DEFAULT_OCR_SYSTEM_PROMPT,
  });
  console.log(text);
}

runOCR();
ollama-ocr uses a locally running vision model. If you want to use the hosted Llama 3.2-Vision model instead, try the llama-ocr library.
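For comparison, this is a sketch of how llama-ocr is typically called. It sends the image to the hosted Llama 3.2-Vision model via Together AI, so it needs an API key; check the library's README for the exact, current options.

import { ocr } from "llama-ocr";

async function runHostedOCR() {
  // llama-ocr returns the recognized content as Markdown.
  const markdown = await ocr({
    filePath: "./trader-joes-receipt.jpg",
    apiKey: process.env.TOGETHER_API_KEY, // Together AI key for the hosted model
  });
  console.log(markdown);
}

runHostedOCR();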
Top comments

Q: Can you say what the speed is like for an image-based PDF with over 60 pages?
A: Recognition speed depends on the performance of your device.

Q: How can I add other languages, such as Asian languages in UTF-8?
A: You can look into MiniCPM-V. For higher-precision OCR scenarios on macOS, macos-vision-ocr is also worth trying.

Q: Is it possible to use the 90B model?
A: Yes: ollama run llama3.2-vision:90b (see ollama.com/library/llama3.2-vision).

Q: Does it hallucinate on large tables with a lot of numbers?
A: You can test Llama-3.2-90B-Vision's recognition on the llamaocr website.

Q: Can the local Ollama minicpm-v model be used?
A: Yes, the minicpm-v vision model is also supported; see the MiniCPM-V example above.