NotebookLM Watermark Removal API: Developer Guide & Examples
NotebookLM Studio provides a REST API for programmatic watermark removal from NotebookLM exports. This guide covers everything you need to integrate watermark removal into your application or workflow.
Why Use the API?
The web UI is great for one-off files. The API is for everything else:
- Document pipelines: Auto-process new NotebookLM exports as they're created
- Multi-tenant apps: Let your users upload NotebookLM files; process them server-side
- Batch automation: Nightly jobs to clean large export archives
- CI/CD integration: Process documentation exports as part of a build pipeline
Getting Started
1. Get Your API Key
Sign up at notebooklmstudio.com (50 free credits, no credit card). Navigate to Dashboard → API Keys → Create New Key.
Your API key starts with nlms_.
2. Base URL
https://api.notebooklmstudio.com/v1
3. Authentication
All requests require the Authorization header:
Authorization: Bearer nlms_YOUR_API_KEY
Core Endpoints
Remove Watermark (Single File)
POST /v1/remove
curl -X POST https://api.notebooklmstudio.com/v1/remove \
  -H "Authorization: Bearer nlms_YOUR_KEY" \
  -F "file=@document.pdf"
Response (202):
{
"job_id": "job_01HXZ123ABC",
"status": "queued",
"estimated_seconds": 9,
"credits_used": 1
}
Check Job Status
GET /v1/jobs/{jobId}
{
"job_id": "job_01HXZ123ABC",
"status": "completed",
"download_url": "https://cdn.notebooklmstudio.com/output/abc.pdf?token=...",
"expires_at": "2026-02-20T13:00:00Z",
"processing_time_ms": 8742,
"credits_used": 1
}
Status lifecycle: queued → processing → completed | failed
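If you are not using the SDK's `waitForJob`, the lifecycle above can be polled with plain `fetch`. A minimal sketch; the terminal states are `completed` and `failed`, per the lifecycle above, and the 1-second poll interval is an arbitrary choice:

```typescript
const TERMINAL_STATES = ['completed', 'failed'] as const;

// True once a job has reached a terminal state.
function isTerminal(status: string): boolean {
  return (TERMINAL_STATES as readonly string[]).includes(status);
}

// Poll GET /v1/jobs/{jobId} until terminal or timeout (raw-fetch sketch).
async function pollJob(apiKey: string, jobId: string, timeoutMs = 90_000) {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    const res = await fetch(`https://api.notebooklmstudio.com/v1/jobs/${jobId}`, {
      headers: { Authorization: `Bearer ${apiKey}` },
    });
    const job = await res.json();
    if (isTerminal(job.status)) return job;
    await new Promise(resolve => setTimeout(resolve, 1000)); // wait between polls
  }
  throw new Error(`Job ${jobId} did not finish within ${timeoutMs}ms`);
}
```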
Batch Processing
POST /v1/batch — Up to 50 files in one request.
{
"files": [
{ "source_url": "https://s3.../file1.pdf", "format": "pdf" },
{ "source_url": "https://s3.../video.mp4", "format": "mp4" }
],
"webhook_url": "https://yourapp.com/hooks/nlms"
}
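If you are calling the batch endpoint directly rather than through the SDK, the request body above can be assembled with a small helper. A sketch; field names follow the JSON shown, and the 50-file cap is enforced client-side:

```typescript
interface BatchFile {
  source_url: string;
  format: string;
}

// Assemble a /v1/batch request body; the endpoint caps a batch at 50 files.
function buildBatchPayload(files: BatchFile[], webhookUrl?: string) {
  if (files.length === 0 || files.length > 50) {
    throw new Error('A batch must contain between 1 and 50 files');
  }
  return webhookUrl ? { files, webhook_url: webhookUrl } : { files };
}
```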
Check Credits
GET /v1/credits
{
"plan": "pro",
"credits_remaining": 492,
"credits_total": 500,
"reset_date": "2026-03-01T00:00:00Z"
}
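A common pattern is to check this endpoint before submitting a large batch. A small guard, sketched here under the assumption of one credit per file (as the `credits_used` fields in the earlier responses suggest):

```typescript
// True if the account can cover a batch of `fileCount` files,
// assuming one credit per file (per the credits_used fields above).
function hasEnoughCredits(creditsRemaining: number, fileCount: number): boolean {
  return creditsRemaining >= fileCount;
}

// Fetch GET /v1/credits and guard a batch submission (raw-fetch sketch).
async function assertCredits(apiKey: string, fileCount: number): Promise<void> {
  const res = await fetch('https://api.notebooklmstudio.com/v1/credits', {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  const { credits_remaining } = await res.json();
  if (!hasEnoughCredits(credits_remaining, fileCount)) {
    throw new Error(`Need ${fileCount} credits, have ${credits_remaining}`);
  }
}
```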
TypeScript SDK — Full Examples
Installation
npm install notebooklm-studio
# or
pnpm add notebooklm-studio
Initialize Client
import { NotebookLMStudio } from 'notebooklm-studio';
const nlms = new NotebookLMStudio({
apiKey: process.env.NLMS_API_KEY!,
// Optional: timeout, retries, base URL override
timeout: 120_000,
maxRetries: 3,
});
Single File Processing
import { readFileSync } from 'fs';
async function processFile(inputPath: string, outputPath: string) {
const buffer = readFileSync(inputPath);
const ext = inputPath.split('.').pop() as 'pdf' | 'mp4' | 'png' | 'jpg';
// Submit job
const job = await nlms.remove({
file: buffer,
filename: inputPath.split('/').pop()!,
format: ext,
});
console.log(`Job submitted: ${job.jobId}, ETA: ${job.estimatedSeconds}s`);
// Wait for completion
const result = await nlms.waitForJob(job.jobId, {
pollIntervalMs: 1000,
timeoutMs: 90_000,
});
if (result.status !== 'completed') {
throw new Error(`Job failed: ${result.error}`);
}
// Download and save
await nlms.downloadToFile(result.downloadUrl, outputPath);
console.log(`Done: ${outputPath}`);
}
Next.js Integration (App Router)
// app/api/remove-watermark/route.ts
import { NextRequest, NextResponse } from 'next/server';
import { NotebookLMStudio } from 'notebooklm-studio';
import { auth } from '@/auth'; // your auth solution
const nlms = new NotebookLMStudio({ apiKey: process.env.NLMS_API_KEY! });
export async function POST(req: NextRequest) {
const session = await auth();
if (!session) return NextResponse.json({ error: 'Unauthorized' }, { status: 401 });
const formData = await req.formData();
const file = formData.get('file') as File | null;
if (!file) return NextResponse.json({ error: 'No file' }, { status: 400 });
const buffer = Buffer.from(await file.arrayBuffer());
const ext = file.name.split('.').pop() as 'pdf' | 'mp4' | 'png' | 'jpg';
const job = await nlms.remove({ file: buffer, filename: file.name, format: ext });
return NextResponse.json({
jobId: job.jobId,
estimatedSeconds: job.estimatedSeconds,
});
}
// app/api/jobs/[jobId]/route.ts
import { NextRequest, NextResponse } from 'next/server';
import { NotebookLMStudio } from 'notebooklm-studio';

const nlms = new NotebookLMStudio({ apiKey: process.env.NLMS_API_KEY! });
export async function GET(
req: NextRequest,
{ params }: { params: { jobId: string } }
) {
const result = await nlms.getJob(params.jobId);
return NextResponse.json(result);
}
Batch Processing with Error Handling
interface ProcessResult {
input: string;
output?: string;
error?: string;
processingTimeMs?: number;
}
async function batchProcessFiles(
files: Array<{ path: string; s3Key: string }>,
outputDir: string
): Promise<ProcessResult[]> {
const results: ProcessResult[] = [];
// Process in chunks of 50
for (let i = 0; i < files.length; i += 50) {
const chunk = files.slice(i, i + 50);
const batchFiles = chunk.map(f => ({
sourceUrl: `s3://${process.env.S3_BUCKET}/${f.s3Key}`,
format: f.path.split('.').pop() as 'pdf' | 'mp4' | 'png' | 'jpg',
}));
const batch = await nlms.batch(batchFiles);
console.log(`Batch ${Math.floor(i/50) + 1}: ${batch.batchId}, ETA: ${batch.estimatedSeconds}s`);
const batchResults = await nlms.waitForBatch(batch.batchId, { timeoutMs: 120_000 });
for (let j = 0; j < batchResults.length; j++) {
const br = batchResults[j];
const inputFile = chunk[j];
if (br.status === 'completed') {
const outputPath = `${outputDir}/${inputFile.path.split('/').pop()}`;
await nlms.downloadToFile(br.downloadUrl, outputPath);
results.push({
input: inputFile.path,
output: outputPath,
processingTimeMs: br.processingTimeMs,
});
} else {
results.push({
input: inputFile.path,
error: br.error ?? 'Unknown error',
});
}
}
}
return results;
}
Webhook Integration
// Submit with webhook
const batch = await nlms.batch(files, {
webhookUrl: `${process.env.APP_URL}/api/webhooks/nlms`,
});
// Webhook handler
// app/api/webhooks/nlms/route.ts
import { NextRequest, NextResponse } from 'next/server';
import { verifyWebhookSignature } from 'notebooklm-studio/webhooks';
export async function POST(req: NextRequest) {
const body = await req.text();
const signature = req.headers.get('x-nlms-signature')!;
// Verify authenticity
const isValid = verifyWebhookSignature(body, signature, process.env.NLMS_WEBHOOK_SECRET!);
if (!isValid) return NextResponse.json({ error: 'Invalid signature' }, { status: 401 });
const event = JSON.parse(body);
if (event.type === 'batch.completed') {
for (const result of event.results) {
if (result.status === 'completed') {
// Trigger your downstream processing (notifyUser is your own function)
await notifyUser(result.metadata?.userId, result.downloadUrl);
}
}
}
return NextResponse.json({ received: true });
}
Python SDK — Full Examples
Installation
pip install notebooklm-studio
Basic Usage
import os
from pathlib import Path
from notebooklm_studio import NotebookLMStudio
client = NotebookLMStudio(api_key=os.environ["NLMS_API_KEY"])
def process_file(input_path: str, output_path: str) -> None:
ext = Path(input_path).suffix.lstrip('.')
with open(input_path, 'rb') as f:
job = client.remove(file=f, filename=Path(input_path).name, format=ext)
print(f"Job submitted: {job.job_id}, ETA: {job.estimated_seconds}s")
result = client.wait_for_job(job.job_id, timeout=90)
if result.status == 'completed':
client.download_to_file(result.download_url, output_path)
print(f"Done: {output_path}, took {result.processing_time_ms}ms")
else:
raise RuntimeError(f"Job failed: {result.error}")
Async Processing with asyncio
import asyncio
import os
from notebooklm_studio.async_client import AsyncNotebookLMStudio
async def process_files_concurrently(file_paths: list[str], output_dir: str):
client = AsyncNotebookLMStudio(api_key=os.environ["NLMS_API_KEY"])
os.makedirs(output_dir, exist_ok=True)
async def process_one(path: str):
ext = path.split('.')[-1]
with open(path, 'rb') as f:
job = await client.remove(file=f, filename=path.split('/')[-1], format=ext)
result = await client.wait_for_job(job.job_id, timeout=90)
if result.status == 'completed':
output = f"{output_dir}/{path.split('/')[-1]}"
await client.download_to_file(result.download_url, output)
return {'success': True, 'output': output}
return {'success': False, 'error': result.error}
tasks = [process_one(p) for p in file_paths]
results = await asyncio.gather(*tasks, return_exceptions=True)
return results
# Run
asyncio.run(process_files_concurrently(['doc1.pdf', 'doc2.pdf'], './output'))
Django Integration
# views.py
from django.http import JsonResponse
from django.views.decorators.http import require_POST
from django.contrib.auth.decorators import login_required
from notebooklm_studio import NotebookLMStudio
import os
nlms = NotebookLMStudio(api_key=os.environ["NLMS_API_KEY"])
@login_required
@require_POST
def submit_watermark_removal(request):
uploaded_file = request.FILES.get('file')
if not uploaded_file:
return JsonResponse({'error': 'No file provided'}, status=400)
job = nlms.remove(
file=uploaded_file.read(),
filename=uploaded_file.name,
format=uploaded_file.name.split('.')[-1]
)
return JsonResponse({
'job_id': job.job_id,
'estimated_seconds': job.estimated_seconds
})
def check_job_status(request, job_id):
result = nlms.get_job(job_id)
return JsonResponse({
'status': result.status,
'download_url': result.download_url if result.status == 'completed' else None
})
Rate Limits & Error Handling
| Plan | Rate Limit |
|---|---|
| Free | 10 req/min |
| Pro | 60 req/min |
| Business | 300 req/min |
Handling common errors:
import { NotebookLMStudioError, InsufficientCreditsError, RateLimitError } from 'notebooklm-studio';

const sleep = (ms: number) => new Promise<void>(resolve => setTimeout(resolve, ms));
try {
const job = await nlms.remove({ file: buffer, filename: 'doc.pdf', format: 'pdf' });
} catch (err) {
if (err instanceof InsufficientCreditsError) {
console.error('Out of credits. Upgrade at notebooklmstudio.com/pricing');
} else if (err instanceof RateLimitError) {
console.error(`Rate limited. Retry after ${err.retryAfterMs}ms`);
await sleep(err.retryAfterMs);
} else if (err instanceof NotebookLMStudioError) {
console.error(`API error ${err.statusCode}: ${err.message}`);
}
}
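For sustained traffic near the limits in the table above, a generic retry wrapper with exponential backoff is worth having. A sketch, not an SDK feature: the doubling delay schedule is an arbitrary choice, and `retryAfterMs` is honored when the thrown error carries it (as `RateLimitError` does).

```typescript
// Delay for attempt n with doubling backoff: base, 2*base, 4*base, ...
function backoffDelay(attempt: number, baseMs = 500): number {
  return baseMs * 2 ** attempt;
}

// Retry an async operation, honoring err.retryAfterMs when present.
async function withRetry<T>(
  op: () => Promise<T>,
  maxAttempts = 3,
  baseMs = 500
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await op();
    } catch (err: any) {
      if (attempt + 1 >= maxAttempts) throw err;
      const waitMs = err?.retryAfterMs ?? backoffDelay(attempt, baseMs);
      await new Promise(resolve => setTimeout(resolve, waitMs));
    }
  }
}
```

Usage: `const job = await withRetry(() => nlms.remove({ file: buffer, filename: 'doc.pdf', format: 'pdf' }));`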
Frequently Asked Questions
Q: Can I use the API in production for commercial applications?
A: Yes. All tiers, including the free tier, permit commercial use.
Q: Is the API available in the free tier?
A: Yes. API access is included in all tiers, including the free tier (50 credits/month).
Q: How do I keep my API key secure in a Next.js app?
A: Never expose your API key to the browser. Store it in environment variables (.env.local) and only call the NotebookLM Studio API from server-side code (API routes, server components, or server actions).
Q: What's the best way to handle large batches (200+ files)?
A: Split into batches of 50, submit sequentially, use webhooks for completion notification. The SDK's batchProcessFolder utility handles this automatically.
Q: Is there an SDK for languages other than TypeScript and Python?
A: Currently TypeScript and Python are officially supported. The REST API is straightforward to call from any language — see the API reference.
Start Building
→ Get your free API key at notebooklmstudio.com
50 free credits. No credit card. Full API access from day one.
View the full API reference documentation →