Testing Open Notebook: A Complete Walkthrough with Gemini AI

Open Notebook is an open-source alternative to Google's NotebookLM, offering more flexibility and control over your AI-powered research workflow. In this post, I'll walk you through a complete test of the application, showing how to set it up with Google Gemini and use it to process and analyze content.

Getting Started

Open Notebook runs in Docker and can be set up in minutes. I configured it with Google Gemini as the AI provider:

docker run -d --name open-notebook \
  -p 8502:8502 -p 5055:5055 \
  -v ./notebook_data:/app/data \
  -v ./surreal_data:/mydata \
  -e GOOGLE_API_KEY=your_gemini_key \
  -e SURREAL_URL="ws://localhost:8000/rpc" \
  -e SURREAL_USER="root" \
  -e SURREAL_PASSWORD="root" \
  -e SURREAL_NAMESPACE="open_notebook" \
  -e SURREAL_DATABASE="production" \
  lfnovo/open_notebook:v1-latest-single

Once running, the application is accessible at http://localhost:8502.
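If you want to confirm everything came up cleanly before opening the browser, a quick check from the shell works (the container name matches the one used in the docker run command above):

# Check that the container started and watch the logs for startup errors
docker ps --filter name=open-notebook
docker logs open-notebook --tail 50

# Confirm the web UI is answering on port 8502
curl -sI http://localhost:8502 | head -n 1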

Creating Your First Notebook

The interface is clean and intuitive. Creating a notebook is straightforward:

  1. Click "New Notebook"
  2. Enter a name (I used "Test Notebook")
  3. Click "Create Notebook"

Notebook Created

The notebook appears immediately in your list, ready for content.

Notebook Opened

Adding Your First Source

Open Notebook supports multiple source types: URLs, file uploads, and direct text input. I tested with a Wikipedia article about Artificial Intelligence.

Step 1: Source & Content

Click "Add Source" and select "Add New Source". The wizard guides you through three steps:

In the first step, choose your source type. I selected "Link" and entered the Wikipedia URL:

URL Entry

Step 2: Organization

The second step lets you select which notebooks should contain this source. Since we're already in a notebook, it's pre-selected:

Organization Step

Step 3: Processing

The final step is where the magic happens. You can choose AI transformations to apply:

  • Dense Summary: Creates a rich, deep summary (enabled by default)
  • Key Insights: Extracts important insights and actionable items
  • Paper Analysis: For technical/scientific papers
  • Reflection Questions: Generates questions to explore the content
  • Simple Summary: A concise overview
  • Table of Contents: Describes document topics

I kept the default "Dense Summary" transformation and enabled embedding for search, which allows the content to be found in vector searches and AI queries.

Processing and Results

After clicking "Done", the source is queued for background processing:

Source Queued

The system extracts content from the web page, processes it, and makes it available for AI interactions. Within seconds, the processing completes:

Processing Complete

The Wikipedia article was successfully processed:

  • Title extracted: "Artificial intelligence"
  • Content size: 525.2K characters
  • Token count: 143.9K tokens
  • Ready for chat: The content is now available in the chat interface
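As a rough sanity check, 525.2K characters mapping to 143.9K tokens works out to about 3.6 characters per token, which is right in the expected range for English prose.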

Configuring AI Models

Before using the chat feature, you need to configure the AI models. Navigate to the Models page and add your Gemini model:

Add Model Dialog

I added "gemini-2.5-flash" as the language model:

Model Added
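If the model doesn't appear or requests fail later, it's worth verifying the API key and model name directly against Google's Generative Language API first. This is a standard Gemini REST call, independent of Open Notebook, and the prompt text is just a placeholder:

# Minimal request to verify the key and the gemini-2.5-flash model name
curl -s "https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash:generateContent?key=$GOOGLE_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"contents": [{"parts": [{"text": "Reply with OK"}]}]}'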

Then I configured the default model assignments:

  • Chat Model: gemini-2.5-flash
  • Transformation Model: gemini-2.5-flash
  • Large Context Model: gemini-2.5-flash (critical for processing large documents)

Models Configured

The Large Context Model is essential for chatting with large sources; gemini-2.5-flash's long context window (up to about 1M input tokens) comfortably fits the entire 143.9K-token article. Make sure it's set:

Large Context Model Set

Chat Interface

With the source processed and models configured, the chat interface is ready. I asked a question about the content:

Question Typed

The AI responded with a comprehensive answer based on the processed Wikipedia article:

Chat Working Success

The chat successfully:

  • Processed the question
  • Retrieved relevant context from the 143.9K token source
  • Generated a detailed response with citations
  • Provided structured information about AI applications, history, and key concepts

Why Open Notebook?

Open Notebook offers several advantages over proprietary solutions:

  1. Privacy: Your data stays on your infrastructure
  2. Flexibility: Choose from 16+ AI providers (OpenAI, Anthropic, Gemini, Ollama, etc.)
  3. Control: Full control over transformations and processing
  4. Open Source: Customize and extend as needed
  5. Cost: Pay only for AI usage, no subscription fees

Conclusion

Open Notebook is a powerful, flexible alternative for AI-powered research. The setup is straightforward, the interface is intuitive, and the processing pipeline works seamlessly. With support for multiple AI providers and a clean, modern UI, it's an excellent choice for anyone who wants more control over their research workflow.

In this test, Open Notebook extracted over 500K characters from a Wikipedia article, processed them with the configured Gemini models, and made the content immediately available for AI-powered analysis, all through a simple, user-friendly interface.

Whether you're a researcher, student, or knowledge worker, Open Notebook provides the tools you need to organize, process, and interact with your content using AI.
