CogntitiveTech

Ollama eBook Summary: A Different Way to Chat with PDF

Introduction

Last year, I began manually summarizing a collection of books to integrate psychological concepts from various sources. After a week of work, I had covered only a few chapters of the first book, and I realized just how time-consuming the task would be. That prompted my quest to learn about large language models.

Over the next six months, I immersed myself in the world of Large Language Models (LLMs). I explored various models, discovering which ones were best suited for my specific task. Through careful fine-tuning, I worked towards achieving production-quality consistency in the results. The outcome of this effort is a powerful content curation tool that has transformed my workflow. It not only accelerates my learning process but also empowers me to share knowledge more readily, without the need for extensive manual content creation.

A different way to chat with PDF

While my current focus is on eBook summaries, this project represents a fundamental shift in how we can interact with PDFs and other document formats. The conventional approach to working with documents typically involves chunking them and inserting the chunks into a Retrieval-Augmented Generation (RAG) database. This method lets an LLM search the documents and answer queries based on what it retrieves. However, it often lacks precision and comprehensiveness.

My method, while similar in some aspects, introduces a crucial difference. I pay meticulous attention to the chunking process, ensuring that documents are divided according to their inherent structure – respecting chapter boundaries. This preserves the logical flow and context of the original material. From there, I chunk each chapter individually and direct my queries to specific sections of the document. This targeted approach yields more accurate and precise knowledge of each subsection within a document.
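As a minimal sketch of this idea, the snippet below splits a plain-text book on chapter headings and then chunks each chapter separately, so no chunk ever crosses a chapter boundary. The `Chapter N` heading pattern and the word-based size limit are illustrative assumptions; the real tool works from ToC metadata rather than regex matching.

```python
import re

def split_into_chapters(text):
    """Split a document on 'Chapter N' headings, keeping each heading
    with its body. The heading pattern is an assumption; real books
    need format-specific handling (ToC metadata, EPUB structure, etc.)."""
    parts = re.split(r"(?m)^(?=Chapter\s+\d+)", text)
    return [p.strip() for p in parts if p.strip()]

def chunk_chapter(chapter, max_words=400):
    """Greedily pack paragraphs into chunks of roughly max_words,
    never crossing the chapter boundary."""
    chunks, current, count = [], [], 0
    for para in chapter.split("\n\n"):
        words = len(para.split())
        if current and count + words > max_words:
            chunks.append("\n\n".join(current))
            current, count = [], 0
        current.append(para)
        count += words
    if current:
        chunks.append("\n\n".join(current))
    return chunks
```

Because each chunk is tied to a specific chapter, a query can be directed at one section of the book instead of searching the whole document at once.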

Mistral 7b Instruct v0.2 - Bulleted Notes

To achieve consistent, high-quality summaries in a standardized format, I fine-tuned the Mistral 7b Instruct v0.2 model. This custom model specializes in creating bulleted note summaries.

Models available on Hugging Face

You can find the base model, GGUF, and LoRA versions in this Hugging Face collection.


Models available on Ollama.com

Mistral 7b Instruct v0.2 Bulleted Notes quants of various sizes are available, along with a Mistral 7b Instruct v0.3 GGUF loaded with the template and instructions for creating the subtitles of our chunked chapters.

Ollama eBook Summary: Bringing It All Together

To streamline the entire process, I've developed a Python-based tool that automates the division, chunking, and bulleted note summarization of EPUB and PDF files with embedded ToC metadata. While PDFs currently require a built-in clickable ToC to function properly, EPUBs tend to be more forgiving.
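To illustrate why a clickable ToC matters, here is a small sketch of turning a PDF's table of contents into page ranges for chapter-level splitting. The entry format matches what PyMuPDF's `doc.get_toc()` returns (`[level, title, start_page]` lists); the helper itself is a hypothetical simplification, not the project's actual code.

```python
def toc_to_sections(toc, page_count):
    """Turn a PDF table of contents (PyMuPDF doc.get_toc() style:
    [level, title, start_page] entries) into (title, start, end)
    page ranges, one range per ToC entry."""
    sections = []
    for i, (level, title, start) in enumerate(toc):
        # Each section ends where the next ToC entry begins,
        # except the last, which runs to the end of the document.
        end = toc[i + 1][2] - 1 if i + 1 < len(toc) else page_count
        sections.append((title, start, max(start, end)))
    return sections
```

With ranges like these, each chapter's pages can be extracted and chunked independently; a PDF without embedded ToC metadata gives this function nothing to work with, which is why such files currently fail.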

You can explore and contribute to this project on GitHub: ollama-ebook-summary.

Beyond Summaries: Arbitrary Queries

Once a book is split into manageable chunks, we create a bulleted note summary for each section. The end result is a markdown document that distills even a 1000-page book into content that can be reviewed in just a couple of hours. But the possibilities don't end there. Once chunked, you can pose arbitrary questions to the document. For instance, asking "What questions does this text answer?" or "What arguments does this text make?" can quickly reveal the core ideas of a research paper or book chapter. This feature is particularly valuable when reviewing numerous research papers. By asking targeted questions, you can swiftly filter out less relevant materials and focus on the most pertinent information for your needs.
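The chunk-then-query step can be sketched against a local Ollama server using its `/api/generate` endpoint. The wrapper below is a minimal illustration, assuming Ollama is running on its default port with a model such as `mistral` pulled; the prompt format is an assumption, not the project's exact template.

```python
import json
import urllib.request

def build_prompt(question, chunk):
    """Combine an arbitrary question with one document chunk."""
    return f"{question}\n\n---\n\n{chunk}"

def ask_ollama(question, chunk, model="mistral",
               host="http://localhost:11434"):
    """Send one chunk plus a question to a local Ollama server
    via its /api/generate endpoint (requires a running server)."""
    payload = json.dumps({
        "model": model,
        "prompt": build_prompt(question, chunk),
        "stream": False,  # return one complete JSON response
    }).encode()
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Looping `ask_ollama("What questions does this text answer?", chunk)` over every chunk of a paper gives a quick relevance filter before reading anything in full.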

Looking Ahead: Future Developments

As we continue to refine and expand this tool, we're exploring new chunking methods for various file types, including Markdown, raw PDF, raw TXT, Word documents, and additional eBook formats. We welcome contributions through our GitHub repository. Whether you're a developer, researcher, or enthusiast, your input can help shape the future of this project.

Stay tuned for the upcoming launch of our paid web application, which will make this powerful tool even more accessible to a wider audience.


I hope you'll find this tool as invaluable as I do.

Whether you're a student, researcher, writer, or simply an avid reader, the eBook summary tool can transform how you interact with and extract knowledge from documents. We invite you to try it out, contribute to its development, and join in revolutionizing the way we interact with and reason around knowledge in the digital age.

Background

Instead of spending weeks per summary, I completed my first 9 book summaries in only 10 days.

While working on this, I stumbled upon the paper Same Task, More Tokens: the Impact of Input Length on the Reasoning Performance of Large Language Models (2024-02-19; Mosh Levy, Alon Jacoby, Yoav Goldberg), which suggests that these models' reasoning capacity drops off sharply between 250 and 1,000 input tokens and begins flattening out between 2,000 and 3,000 tokens.

This matches my own experience creating comprehensive bulleted notes while summarizing many long documents, and it clarifies the context length needed to get the best results from these models.
