
RoTSL

Resume Tailor

Notion MCP Challenge Submission 🧠

This is a submission for the Notion MCP Challenge.

What I Built

Resume Tailor takes a job posting and your resume, then outputs a tailored resume and cover letter as PDFs. The whole thing runs in your browser. No sign-up, no server, no data stored anywhere except your Notion workspace if you want it there.

You pick Claude or Gemini (Gemini has a free tier, no credit card), paste or upload the job description, upload your resume, and click go. Two PDFs come out the other side.

It also runs as a local Flask app with more features (DOCX support, job URL fetching, richer PDFs) and a CLI if that's your thing.

The one rule I actually cared about

The AI is not allowed to make things up. That sounds obvious but it's easy to get wrong. The system prompt on every single call says: you may reorder and reword existing content, you may use keywords from the job description if they honestly describe something the candidate already did, but you cannot add skills, invent metrics, or fabricate roles. If the job asks for five years of Kubernetes experience and the resume doesn't mention Kubernetes, that gap stays in the output.

I've seen other resume tools confidently add skills the user never had. I didn't want to build that.


How I used Notion MCP

The Notion integration does two things: it reads job descriptions from Notion pages, and it writes every run's output back to Notion.

If you already track jobs in a Notion board, you can feed a page ID directly to the tool instead of copy-pasting the description. The system reads the page content via the MCP server and uses it as the job description.
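The page content comes back as a list of block objects, which has to be flattened into plain text before it can serve as a job description. A minimal sketch of that flattening step, assuming the JSON shape returned by the Notion API's block-children endpoint (function name is illustrative, not the project's code):

```python
# Turn Notion block JSON (as returned by the API's blocks.children.list
# endpoint) into plain text usable as a job description.
def blocks_to_text(results):
    """Concatenate the plain_text of every rich_text run in each block."""
    lines = []
    for block in results:
        payload = block.get(block.get("type", ""), {})
        runs = payload.get("rich_text", [])
        line = "".join(run.get("plain_text", "") for run in runs)
        if line:
            lines.append(line)
    return "\n".join(lines)
```

Blocks without rich text (dividers, images) simply drop out, which is what you want for a text-only job description.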

After each run, two things get created in Notion. A new row goes into a Job Applications database with the company name, role, date, and a snippet of the job description. The full tailored resume and cover letter text go into a linked Outputs database as readable Notion blocks. A few weeks into a job search you have a record of every application: what you sent and what the original job asked for.
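Writing a full resume into Notion has one wrinkle: the API caps a single rich_text item at 2,000 characters, so long text has to be chunked into multiple paragraph blocks first. A sketch of that chunking, with illustrative names rather than the project's exact code:

```python
# Split long text into Notion paragraph blocks, respecting the API's
# 2,000-character limit on a single rich_text item.
def text_to_blocks(text, chunk=2000):
    paragraphs = []
    for start in range(0, len(text), chunk):
        paragraphs.append({
            "object": "block",
            "type": "paragraph",
            "paragraph": {"rich_text": [
                {"type": "text", "text": {"content": text[start:start + chunk]}}
            ]},
        })
    return paragraphs
```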

The repo also includes a .mcp.json config for the official @notionhq/notion-mcp-server. Claude Desktop and Cursor can pick this up and work with the databases directly, which opens up things like asking Claude to summarize which applications are still pending or draft a follow-up for a specific role.

One thing I ran into: the Notion API fails if you try to write to a property that doesn't exist on a database. The first version broke whenever someone's title column wasn't named exactly "Name". The fix was to call databases.retrieve() before writing, find the actual title property name dynamically, and put everything else (status, date, company) in the page body as paragraph blocks instead of as database properties. It works now regardless of how the database is configured.

def _get_title_property_name(client, db_id):
    """Return the name of the database's title property.

    Notion rejects writes to properties that don't exist, and the title
    column isn't always called "Name", so look it up instead of assuming.
    """
    db = client.databases.retrieve(database_id=db_id)
    for name, data in db["properties"].items():
        if data["type"] == "title":
            return name
    return "Name"  # every database has a title property, but fall back anyway
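With the real title property name in hand, the write only touches that one property and carries everything else in the page body. A sketch of the payload shape for pages.create under that design (names here are assumptions, not the project's code):

```python
# Build a pages.create payload that targets only the discovered title
# property; status, date, and company travel as body paragraphs instead
# of database properties, so any schema works.
def build_row(db_id, title_prop, title_text, body_lines):
    return {
        "parent": {"database_id": db_id},
        "properties": {
            title_prop: {"title": [{"text": {"content": title_text}}]},
        },
        "children": [
            {"object": "block", "type": "paragraph",
             "paragraph": {"rich_text": [
                 {"type": "text", "text": {"content": line}}]}}
            for line in body_lines
        ],
    }
```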

Video demo

Resume Tailor Demo


Show us the code

GitHub: https://github.com/rotsl/resume-tailor

Live demo: https://rotsl.github.io/resume-tailor

How it's structured

resume-tailor/
├── docs/index.html                   ← the GitHub Pages app, fully self-contained
├── app.py                            ← local Flask server
├── main.py                           ← CLI
├── instruct.md                       ← formatting rules injected into every prompt
├── .mcp.json                         ← Notion MCP server config
├── .github/workflows/deploy.yml      ← deploys docs/ to GitHub Pages on push
├── scripts/
│   └── setup_notion_databases.py     ← creates the Notion DBs, writes IDs to .env
└── src/
    ├── tailor.py                     ← AI engine, supports Claude and Gemini
    ├── parser.py                     ← PDF / DOCX / text extraction
    ├── pdf_generator.py              ← PDF output via ReportLab
    ├── web_context.py                ← fetches company context from the web
    └── notion_integration.py         ← Notion MCP read/write

Supporting two AI providers

src/tailor.py has a single tailor_resume() function that accepts provider, model, and api_key arguments. The same prompts go to both providers. The browser version calls the APIs directly via fetch(); the local version uses the Python SDKs.

# Claude
tailored = tailor_resume(
    resume, job_description,
    provider="claude",
    model="claude-sonnet-4-6",
    api_key="sk-ant-..."
)

# Gemini free tier
tailored = tailor_resume(
    resume, job_description,
    provider="gemini",
    model="gemini-2.5-flash",
    api_key="AIza..."
)

When no key is passed, it falls back to environment variables, so the CLI reads from .env without asking every time.
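The fallback itself is a few lines. A minimal sketch, using the env var names from the quick start below (the function name is an assumption, not the project's code):

```python
import os

# Resolve the API key: explicit argument wins, otherwise fall back to
# the provider's environment variable (loaded from .env in the CLI).
def resolve_api_key(provider, api_key=None):
    if api_key:
        return api_key
    env_var = {"claude": "ANTHROPIC_API_KEY", "gemini": "GEMINI_API_KEY"}[provider]
    return os.environ.get(env_var)
```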

The prompt structure

Two layers. The system prompt sets the hard rules (no fabrication, no adding skills). The user prompt gives the model the original resume, the job description, and any web context about the company as clearly labelled separate sections.

ABSOLUTE RULES — NEVER VIOLATE:
1. You may ONLY use information that exists in the candidate's original resume.
2. Do NOT invent, embellish, or assume any experience, skills, metrics, or facts.
3. You MAY reorder, reword, and emphasize existing content.
4. Mirror keywords from the job description only where they truthfully apply.
5. If the candidate lacks a required skill, do NOT add it. Leave it absent.

The cover letter call gets both the original resume and the already-tailored resume, so it can see exactly what was kept and what was cut.

Runtime config using instruct.md

Formatting rules live in instruct.md and get injected into every prompt at call time. Swap the file out and the output changes; no code edits. Someone who wants a one-page resume with a specific section order can describe that there. Someone applying to academic roles can put a different set of rules in.
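The injection step can be sketched in a few lines, assuming the file sits next to the app (function names and the section header are illustrative, not the project's exact code):

```python
from pathlib import Path

# Combine the hard-rule system prompt with user-editable formatting rules.
def inject_instructions(base_rules, instruct_text):
    if not instruct_text.strip():
        return base_rules
    return base_rules + "\n\nFORMATTING RULES:\n" + instruct_text.strip()

# Re-read instruct.md on every call so edits take effect immediately,
# with no restart or code change.
def build_system_prompt(base_rules, instruct_path="instruct.md"):
    path = Path(instruct_path)
    text = path.read_text() if path.exists() else ""
    return inject_instructions(base_rules, text)
```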

The GitHub Pages version

docs/index.html is the entire app. PDF.js reads uploaded PDFs in the browser, the AI APIs are called directly via fetch, jsPDF builds the output PDFs in memory. The GitHub Actions workflow just copies that one file to Pages on every push to main.

- name: Upload Pages artifact
  uses: actions/upload-pages-artifact@v3
  with:
    path: docs/

No build step, no npm, no bundler. The tradeoff is no Notion logging on the static version, since there's nowhere safe to store the Notion API key client-side.

Notion setup script

python scripts/setup_notion_databases.py YOUR_NOTION_PAGE_ID

Creates both databases, then writes their IDs into .env automatically. You don't have to copy anything.
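The .env update is the part worth getting right: the new IDs should replace any stale ones without clobbering the rest of the file. A sketch of that step, with env var names that are assumptions rather than the project's actual ones:

```python
# Replace (or append) the two database-ID lines in a .env file's text,
# leaving every other line untouched.
def update_env(env_text, apps_db_id, outputs_db_id):
    keep = [line for line in env_text.splitlines()
            if not line.startswith(("NOTION_APPLICATIONS_DB_ID=",
                                    "NOTION_OUTPUTS_DB_ID="))]
    keep.append(f"NOTION_APPLICATIONS_DB_ID={apps_db_id}")
    keep.append(f"NOTION_OUTPUTS_DB_ID={outputs_db_id}")
    return "\n".join(keep) + "\n"
```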

Quick start

git clone https://github.com/YOUR_USERNAME/resume-tailor.git
cd resume-tailor
python -m venv venv && source venv/bin/activate
pip install -r requirements.txt
cp .env.example .env
# Add GEMINI_API_KEY (free) or ANTHROPIC_API_KEY, plus NOTION_API_KEY

python scripts/setup_notion_databases.py YOUR_NOTION_PAGE_ID

python app.py  # → http://localhost:5000
# or
python main.py tailor --resume resume.pdf --job-url https://...

Stack: Claude / Gemini, Notion MCP, ReportLab, pdfplumber, jsPDF, PDF.js, Flask.
