🧠 Build a Text Summarizer Using Local Ollama (phi3) + LangChain
In this tutorial, we'll build a text summarizer using:
- 🦙 Ollama (running locally)
- 🤖 The phi3 model
- 🔗 LangChain
- 🐍 Python
By the end, you'll have a working CLI-based text summarizer that runs fully offline.
📝 What We're Building
Input: Long text
Output: A clean, concise summary
Model: phi3 running locally via Ollama
🖥️ Step 1: Install Ollama
Linux / macOS

```shell
curl -fsSL https://ollama.com/install.sh | sh
```

Windows
Download the installer from:
https://ollama.com
📥 Step 2: Pull the phi3 Model
After installation:

```shell
ollama pull phi3
```

Test it:

```shell
ollama run phi3
```

Type something like:

```
Summarize: Artificial Intelligence is transforming the world...
```

If it responds, ✅ Ollama is working.
Exit with:

```
/exit
```
🐍 Step 3: Create a Python Environment

```shell
mkdir text_summarizer_project
cd text_summarizer_project
python -m venv venv
source venv/bin/activate   # Linux/macOS
# OR
venv\Scripts\activate      # Windows
```
📦 Step 4: Install Required Packages

```shell
pip install langchain langchain-ollama langchain-core
```
📁 Step 5: Project Structure
Create this structure:

```
text_summarizer_project/
│
├── main.py
└── utils/
    ├── loader.py
    ├── write_output.py
    └── constants.py
```
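If you prefer scripting the layout instead of creating it by hand, a small `pathlib` sketch can do it (this helper is a convenience, not one of the project's files):

```python
from pathlib import Path

# Create the project directory and the utils package folder.
base = Path("text_summarizer_project")
(base / "utils").mkdir(parents=True, exist_ok=True)

# Create the empty module files from the tree above.
for name in ("main.py", "utils/loader.py", "utils/write_output.py", "utils/constants.py"):
    (base / name).touch()

print(sorted(p.name for p in (base / "utils").iterdir()))
```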
🧱 Step 6: Create Supporting Files
📄 utils/constants.py

```python
information = """
Artificial Intelligence (AI) is a branch of computer science
that aims to create systems capable of performing tasks
that normally require human intelligence...
"""

summary_template = """
You are a helpful AI assistant.
Summarize the following text in a concise paragraph:

{information}
"""
💾 utils/write_output.py

```python
def write_output(text, filename):
    """Write the given text to a file using UTF-8 encoding."""
    with open(filename, "w", encoding="utf-8") as f:
        f.write(text)
```
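A quick self-contained check of the helper (the function is duplicated here so the snippet runs on its own; `demo_output.txt` is just a throwaway filename):

```python
def write_output(text, filename):
    # Same helper as in utils/write_output.py.
    with open(filename, "w", encoding="utf-8") as f:
        f.write(text)

write_output("Hello, summary!", "demo_output.txt")

# Read the file back to confirm the round trip.
with open("demo_output.txt", encoding="utf-8") as f:
    print(f.read())
```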
⏳ utils/loader.py (Simple Terminal Spinner)

```python
import sys
import threading
import time

class Loader:
    """Context manager that shows a spinner while a block of code runs."""

    def __init__(self, message, success, fail):
        self.message = message
        self.success = success
        self.fail = fail
        self.done = False

    def spinner(self):
        while not self.done:
            for char in "|/-\\":
                sys.stdout.write(f"\r{self.message} {char}")
                sys.stdout.flush()
                time.sleep(0.1)

    def __enter__(self):
        self.thread = threading.Thread(target=self.spinner)
        self.thread.start()
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        self.done = True
        self.thread.join()
        if exc_type:
            print(f"\r❌ {self.fail}")
        else:
            print(f"\r✅ {self.success}")
```
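The Loader works because of Python's context-manager protocol: `__enter__` runs when the `with` block starts, `__exit__` runs when it ends, and `exc_type` tells `__exit__` whether the block raised. A stripped-down sketch (the `DemoLoader` class is hypothetical, not part of the project):

```python
events = []

class DemoLoader:
    def __enter__(self):
        events.append("start")
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        # exc_type is None on clean exit, the exception class otherwise.
        events.append("fail" if exc_type else "done")
        return False  # don't swallow exceptions

with DemoLoader():
    events.append("work")

print(events)  # ['start', 'work', 'done']
```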
🧠 Step 7: Create main.py

```python
from langchain_core.prompts import PromptTemplate
from langchain_ollama import ChatOllama

from utils.loader import Loader
from utils.write_output import write_output
from utils.constants import information, summary_template

def text_summarizer():
    print("🧠 Local Text Summarizer Using phi3")

    summary_prompt_template = PromptTemplate(
        input_variables=["information"],
        template=summary_template
    )

    llm = ChatOllama(
        model="phi3",
        temperature=0
    )

    chain = summary_prompt_template | llm

    try:
        with Loader(
            message="Agent is thinking...",
            success="Response received!",
            fail="Model crashed!"
        ):
            response = chain.invoke({"information": information})
        write_output(response.content, "output.txt")
        print("📄 Summary written to output.txt")
    except Exception as e:
        print("Error:", e)

if __name__ == "__main__":
    text_summarizer()
```
▶️ Step 8: Run the Project
Make sure:
- Ollama is installed
- phi3 is pulled
- Your virtual environment is activated
Run:

```shell
python main.py
```

You should see:

```
Agent is thinking...
✅ Response received!
📄 Summary written to output.txt
```

Open output.txt to see the result.
🧪 How It Works
1️⃣ PromptTemplate
Formats the user input into a proper instruction.
2️⃣ ChatOllama
Connects to the locally running Ollama model.
3️⃣ LCEL Operator (|)
This line:

```python
chain = summary_prompt_template | llm
```

means:
Prompt → LLM → Response
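To make the pipe less magical, here is a toy sketch of what `|` does conceptually. These are hypothetical stand-ins, not LangChain's real classes: each step exposes `invoke()`, and `|` composes them left to right.

```python
class Step:
    def __init__(self, fn):
        self.fn = fn

    def invoke(self, x):
        return self.fn(x)

    def __or__(self, other):
        # Output of the left step becomes input of the right step.
        return Step(lambda x: other.invoke(self.invoke(x)))

prompt = Step(lambda d: f"Summarize: {d['information']}")
llm = Step(lambda text: text.upper())  # stand-in for a real model call

chain = prompt | llm
print(chain.invoke({"information": "AI is everywhere"}))
# SUMMARIZE: AI IS EVERYWHERE
```

This is why `chain.invoke({"information": ...})` in main.py first formats the prompt and then hands the result to the model.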
🔥 Optional Improvements
You can extend this project by:
- Accepting user input dynamically
- Adding a CLI argument parser
- Building a FastAPI wrapper
- Converting it into a web app
- Streaming responses instead of waiting
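As a starting point for the CLI-argument idea, here is a minimal `argparse` sketch (the flag names `--file` and `--model` are suggestions, not part of the original project):

```python
import argparse

def build_parser():
    parser = argparse.ArgumentParser(
        description="Summarize text with a local LLM"
    )
    parser.add_argument("--file", help="path to a text file to summarize")
    parser.add_argument("--model", default="phi3", help="Ollama model name")
    return parser

# Parse an example argument list; in main.py you would call parse_args()
# with no arguments to read sys.argv.
args = build_parser().parse_args(["--file", "notes.txt"])
print(args.model, args.file)
# phi3 notes.txt
```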
⚠️ Troubleshooting
❌ ModuleNotFoundError
Check that:
- You activated the virtual environment
- All files exist
- You are running from the correct directory
❌ Model not found
Run:

```shell
ollama pull phi3
```
🎯 Final Result
You now have a fully offline:
- LLM-powered
- Terminal-based
- Extensible
text summarizer running locally on your machine 🔥