Vikas Choubey

🧠 Build a Text Summarizer Using Local Ollama (phi3) + LangChain

In this tutorial, we'll build a text summarizer using:

  • 🦙 Ollama (running locally)
  • 🤖 phi3 model
  • 🔗 LangChain
  • 🐍 Python

By the end, you'll have a working CLI-based text summarizer that runs fully offline.


📌 What We're Building

Input: Long text

Output: Clean summarized text

Model: phi3 running locally via Ollama


🖥️ Step 1: Install Ollama

Linux / macOS

curl -fsSL https://ollama.com/install.sh | sh

Windows

Download installer from:
https://ollama.com


🚀 Step 2: Pull the phi3 Model

After installation:

ollama pull phi3

Test it:

ollama run phi3

Type something like:

Summarize: Artificial Intelligence is transforming the world...

If it responds → ✅ Ollama is working.

Exit with:

/bye

🐍 Step 3: Create Python Environment

mkdir text_summarizer_project
cd text_summarizer_project

python -m venv venv
source venv/bin/activate  # Linux/macOS
# OR
venv\Scripts\activate     # Windows

📦 Step 4: Install Required Packages

pip install langchain langchain-ollama langchain-core

πŸ“ Step 5: Project Structure

Create this structure:

text_summarizer_project/
│
├── main.py
└── utils/
    ├── loader.py
    ├── write_output.py
    └── constants.py

🧱 Step 6: Create Supporting Files


πŸ“ utils/constants.py

information = """
Artificial Intelligence (AI) is a branch of computer science
that aims to create systems capable of performing tasks
that normally require human intelligence...
"""

summary_template = """
You are a helpful AI assistant.

Summarize the following text in a concise paragraph:

{information}
"""

💾 utils/write_output.py

def write_output(text, filename):
    with open(filename, "w", encoding="utf-8") as f:
        f.write(text)

⏳ utils/loader.py (Simple Terminal Spinner)

import sys
import threading
import time

class Loader:
    def __init__(self, message, success, fail):
        self.message = message
        self.success = success
        self.fail = fail
        self.done = False

    def spinner(self):
        # Cycle through spinner characters until the work is done
        while not self.done:
            for char in "|/-\\":
                if self.done:
                    break
                sys.stdout.write(f"\r{self.message} {char}")
                sys.stdout.flush()
                time.sleep(0.1)

    def __enter__(self):
        self.thread = threading.Thread(target=self.spinner)
        self.thread.start()
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        self.done = True
        self.thread.join()
        if exc_type:
            print(f"\r❌ {self.fail}")
        else:
            print(f"\r✅ {self.success}")

🧠 Step 7: Create main.py

from langchain_core.prompts import PromptTemplate
from langchain_ollama import ChatOllama
from utils.loader import Loader
from utils.write_output import write_output
from utils.constants import information, summary_template


def text_summarizer():
    print("🧠 Local Text Summarizer Using phi3")

    summary_prompt_template = PromptTemplate(
        input_variables=["information"],
        template=summary_template
    )

    llm = ChatOllama(
        model="phi3",
        temperature=0
    )

    chain = summary_prompt_template | llm

    try:
        with Loader(
            message="Agent is thinking...",
            success="Response received!",
            fail="Model crashed!"
        ):
            response = chain.invoke({"information": information})

        write_output(response.content, "output.txt")
        print("📄 Summary written to output.txt")

    except Exception as e:
        print("Error:", e)


if __name__ == "__main__":
    text_summarizer()

▶️ Step 8: Run the Project

Make sure:

  • Ollama is installed
  • phi3 is pulled
  • Your virtual environment is activated

Run:

python main.py

You should see:

Agent is thinking...
✅ Response received!
📄 Summary written to output.txt

Open output.txt to see the result.


🧪 How It Works

1️⃣ PromptTemplate

Formats the user input into a proper instruction.
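
A quick way to see what this step produces: the substitution PromptTemplate performs is essentially Python's built-in str.format. A minimal, dependency-free sketch (the sample text is illustrative):

```python
# The same template string defined in utils/constants.py
summary_template = """
You are a helpful AI assistant.

Summarize the following text in a concise paragraph:

{information}
"""

# This is roughly what PromptTemplate does before handing
# the filled-in prompt to the model
prompt = summary_template.format(information="AI is transforming the world.")
print(prompt)
```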

2️⃣ ChatOllama

Connects to the locally running Ollama model.

3️⃣ LCEL Operator (|)

This line:

chain = summary_prompt_template | llm
Enter fullscreen mode Exit fullscreen mode

Means:

Prompt → LLM → Response
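
Runnables compose like this because LangChain overloads Python's `__or__` operator. Here's a toy, dependency-free sketch of the idea — Step and the fake model below are illustrative stand-ins, not real LangChain classes:

```python
class Step:
    """Toy stand-in for a LangChain runnable."""
    def __init__(self, fn):
        self.fn = fn

    def invoke(self, value):
        return self.fn(value)

    def __or__(self, other):
        # a | b builds a new step that runs a, then feeds its result to b
        return Step(lambda value: other.invoke(self.invoke(value)))

prompt = Step(lambda d: f"Summarize: {d['information']}")
fake_llm = Step(lambda text: f"[summary of: {text}]")  # pretend model

chain = prompt | fake_llm
print(chain.invoke({"information": "AI is transforming the world."}))
# → [summary of: Summarize: AI is transforming the world.]
```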

🔥 Optional Improvements

You can extend this project by:

  • Accepting user input dynamically
  • Adding a CLI argument parser
  • Building a FastAPI wrapper
  • Converting into a web app
  • Streaming responses instead of waiting
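
For instance, the first two ideas can be combined using the standard library's argparse. The flag names below (--file, --out) are illustrative choices, not part of the original project:

```python
import argparse

def build_parser():
    # CLI for the summarizer: pick an input file and an output path
    parser = argparse.ArgumentParser(description="Local phi3 text summarizer")
    parser.add_argument("--file", help="path to a text file to summarize")
    parser.add_argument("--out", default="output.txt",
                        help="where to write the summary")
    return parser

# Demonstration: parse a sample command line instead of sys.argv
args = build_parser().parse_args(["--file", "notes.txt", "--out", "summary.txt"])
print(args.file, args.out)  # → notes.txt summary.txt
```

In text_summarizer you would then read the contents of args.file (falling back to the information constant when no file is given) and pass args.out to write_output.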

⚠️ Troubleshooting

❌ ModuleNotFoundError

Check:

  • You activated the virtual environment
  • All files exist
  • You are running from the correct directory

❌ Model not found

Run:

ollama pull phi3

🎯 Final Result

You now have a fully offline:

  • LLM-powered
  • Terminal-based
  • Extensible text summarizer

Running locally on your machine 🔥
