NotebookLM is awesome. Before we jump into the integration, I’m going to give you a quick overview of NotebookLM, in case you’re not familiar with it. But before I drop the overview, here’s an important thing to know:
The following summary was created entirely with Gemini inside Antigravity, which is querying my existing NotebookLM notebook via the NotebookLM MCP server!
If you have no idea what I’m talking about, then don’t worry. It will all become clear very soon.
Here’s the prompt I’ve used to create the summary below:
Based on what you know from my notebook “The Power of NotebookLM”, provide a short summary of what NotebookLM is, its USPs, and some of its killer features, in Dazbo style.
Overview Created by Gemini/MCP/Notebook
And here’s what Gemini responded with:
What is NotebookLM?
NotebookLM is a personalized AI research assistant built on top of Google’s multimodal Gemini models. It’s designed to help you understand, synthesize, and present complex information that’s scattered across all sorts of different formats.
Why is it so special? (The USPs)
- Strict Source Grounding: This is the big one. General-purpose models can hallucinate — and often do when handling niche topics. But NotebookLM’s answers are grounded only in the documents you provide. Every claim it makes includes an inline citation linking directly back to the original passage in your sources. It’s exactly what you need when you have a low tolerance for AI hallucinations.
- A Massive 1 Million Token Context Window: It can analyze roughly 700,000 words at a time. And the best part? This full 1-million-token context window is available to everyone, even on the highly generous free tier.
- Extensive Multimodal Support: It handles an array of inputs. PDFs, Google Docs, Sheets, Word documents, images, web URLs, and even YouTube videos are all supported out of the box.
The Killer Features (Where it really shines)
- Deep Research Agents: NotebookLM can actively search the web to find new sources for you. It builds a research plan, scans hundreds of sites, and generates a comprehensive, heavily cited 5-page report in minutes — effectively eliminating the manual grunt work of initial literature reviews.
- Audio and Video Overviews: You can instantly turn dense, boring source materials into an engaging, two-person podcast-style conversation! You can customize these Audio Overviews to focus on specific topics or even debate your work. Plus, it offers Video Overviews — narrated slides where the AI generates visuals and pulls diagrams and quotes from your documents to explain complex concepts visually.
- AI Slide Decks and Infographics: Powered by Nano Banana, NotebookLM can automatically transform your research into professional, presentation-ready slide decks and infographics. It essentially turns the platform into a complete “research-to-presentation” pipeline.
- Upgraded Studio Output: NotebookLM doesn’t just answer questions; it actively organizes your content into actionable formats — like Briefing Documents, Study Guides, FAQs, and Mind Maps. Through the Studio panel, you can generate multiple formats.
Speaking to Your Notebooks from Your AI Tools
I live in tools like Google Antigravity and Gemini CLI. Wouldn’t it be great if, from these tools, I could:
- Query my existing NotebookLM notebooks
- Create new notebooks, e.g. based on research I’m doing in Gemini CLI
- Perform research by combining my notebooks
Well, you can. And in a minute, I’ll show you how.
Recap: MCP
The Model Context Protocol (MCP) is an open standard that allows AI models and agents to safely and easily interact with external tools, APIs, and data. Think of it as the universal adapter that lets models and agents discover and execute the tools they have access to.
TL;DR:
Integrating LLMs used to mean writing custom, brittle API connections for every single data source — a classic M*N maintenance nightmare. MCP solves this. Instead of hard-coding the plumbing, the LLM just interprets your natural language requests and figures out which tools to use on the fly. You ask it to do something, the LLM tells the MCP client what tool it needs, and the client routes it to the right server to grab the data or run the action.
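To put a number on that M*N problem, here is a toy back-of-the-envelope calculation. The client and tool counts are made up purely for illustration:

```shell
# Hypothetical counts: 3 AI clients, 4 tools/data sources.
clients=3
tools=4

# Without MCP: every client needs a bespoke connector to every tool.
echo "custom integrations: $((clients * tools))"

# With MCP: each client and each tool implements the protocol once.
echo "MCP adapters: $((clients + tools))"
```

The gap widens fast: add a fourth client and the bespoke approach jumps by another four connectors, while the MCP approach grows by just one.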
Because MCP creates a universal standard — often described as the “USB-C for AI applications” — it unlocks massive off-the-shelf reusability. Developers can build an MCP server once, and any MCP-compatible AI client (like Gemini CLI, Antigravity, or your own agents) can instantly connect to it.
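Under the hood, a tool invocation is just a standard JSON-RPC 2.0 message. Here's an illustrative `tools/call` request a client might send to a server; note that this is hand-written rather than captured from a real session, and the tool name and arguments are hypothetical:

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "notebook_query",
    "arguments": {
      "notebook": "The Power of NotebookLM",
      "question": "What is NotebookLM?"
    }
  }
}
```

The LLM never sees this plumbing; it just picks a tool and supplies the arguments, and the MCP client handles the wire format.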
NotebookLM MCP Server
I wondered: “Is there an existing NotebookLM MCP server I can use?”
Of course there is! A very brief search led me to https://github.com/jacob-bd/notebooklm-mcp-cli. And it’s clearly very popular.
Setup
I’ll show you how I got it working. It only took a couple of minutes. Note that I’m using WSL Ubuntu, which creates the need for an extra step.
From your terminal:
```shell
# If you're running in Linux or WSL, make sure you can launch Chrome or Chromium, e.g.
sudo apt install chromium-browser

# Now install the NotebookLM MCP CLI
uv tool install notebooklm-mcp-cli
```
Let’s check that the NotebookLM CLI is working by running the `nlm` command:
So far, so good! Next, I’ll authenticate with nlm so that the CLI/MCP is able to access my own notebooks:
```shell
nlm login
```
Great, we’re authenticated.
Finally, let’s register the MCP server with our tool of choice by updating its MCP configuration:

- In Antigravity, add it to `.gemini/antigravity/mcp_config.json`
- In Gemini CLI, add it to `.gemini/settings.json`
This is the configuration we need to add:
```json
"mcpServers": {
  // other MCP servers
  "notebooklm-mcp": {
    "command": "uvx",
    "args": ["--from", "notebooklm-mcp-cli", "notebooklm-mcp"]
  }
}
```
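One gotcha: `mcpServers` is a top-level key, so the snippet above nests into your existing config file rather than replacing it. As a sketch, a minimal complete `settings.json` for Gemini CLI might look like this, with any settings you already have sitting alongside the `mcpServers` block:

```json
{
  "mcpServers": {
    "notebooklm-mcp": {
      "command": "uvx",
      "args": ["--from", "notebooklm-mcp-cli", "notebooklm-mcp"]
    }
  }
}
```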
Try It Out
If I now run `/mcp` from Gemini CLI, I can see this in the output:
So now, using natural language commands, we can do things like:
- List notebooks.
- Create and delete notebooks.
- Query notebooks.
- Invite collaborators by email.
- Add sources.
- Perform deep research on a topic.
- Create studio artifacts, such as podcasts, videos, slide decks, infographics, mindmaps, and more.
- Download NotebookLM artifacts, e.g. audio podcasts as MP3, videos as MP4, reports as markdown, slide decks as PDF or PPTX, infographics as PNG, and so on.
Let’s try some natural language queries in Google Antigravity.
What notebooklm notebooks do I have?
What tools do you have for working with notebookLM?
Working with my “Power of NotebookLM Notebook”
I wanted to take this blog and add it to my existing notebook about NotebookLM. Yeah, I know it’s a bit meta.
First, let’s look at the sources that make up this notebook:
So I created a PDF of this blog (whilst I’m still writing it), added it to the root of my Antigravity workspace, and then gave this prompt to the Gemini agent:
Please add my “Integrate NotebookLM” PDF to my “Power of NotebookLM” notebook.
The actual PDF was called “Integrate NotebookLM with Gemini CLI, Google Antigravity or Other Agents with MCP.pdf”, but of course, Gemini is easily able to work out which file I’m talking about.
That was fast! Let’s see if it worked…
Et voilà! The new source has been added. That was easy.
Now, still in Antigravity, let’s create an infographic on this topic. Here’s my prompt:
Create an infographic in that notebook, showing how we can use NotebookLM MCP to integrate agentic clients (including Gemini CLI, Antigravity, or other agents) to your NotebookLM notebooks. It should have a modern look-and-feel with white background.
The Antigravity Gemini agent immediately determines that the studio_create tool needs to be used. Here’s the output in Antigravity:
Of course, infographic generation takes a while; NotebookLM produces it asynchronously. After a couple of minutes, the infographic has been created, and is also saved directly to my Antigravity workspace!
And here’s the infographic:
I like it. I'll use it as the header image for this blog.
Wrap-Up
Let’s wrap up this blog by getting NotebookLM to do it, once again from the agent inside Antigravity. Here’s my prompt:
Based on the “Integrate NotebookLM” source in my notebook, create a wrap-up in Dazbo style.
The response was pretty good. And here it is, only slightly tweaked and shortened by me:
Here’s the bottom line. Historically, integrating an incredibly powerful research engine like NotebookLM into your own custom AI agents required a lot of work. But with the Model Context Protocol (MCP) — which is essentially the ultimate “USB-C for AI applications” — all of that pain just vanishes.
By dropping the NotebookLM MCP server into your favourite AI environment (whether that’s Gemini CLI, Google Antigravity, or your own custom agent), you instantly unlock the ability to control NotebookLM using nothing but natural language.
Think about it. You can ask your agent to spin up a new notebook, cross-reference data across hundreds of sources, dynamically search the web for new references, or even kick off the generation of a podcast or a slide deck — all without ever leaving your command line or your IDE. It completely transforms NotebookLM from a brilliant standalone web app into a robust, heavily cited backend research service that powers your autonomous agents.
Before You Go
- Please share this with anyone you think will be interested. It might help them, and it really helps me!
- Please give me 50 claps! (Just hold down the clap button.)
- Feel free to leave a comment 💬.
- Follow and subscribe, so you don’t miss my content.
Useful Links and References
- NotebookLM
- NotebookLM is Google’s INSANELY COOL Personal AI Research Assistant
- Google Antigravity
- Gemini CLI
- NotebookLM MCP CLI