RoTSL

Resume Tailor

Notion MCP Challenge Submission 🧠

This is a submission for the Notion MCP Challenge

What I Built

Resume Tailor takes a job posting and your resume, then outputs a tailored resume and cover letter as PDFs. The whole thing runs in your browser. No sign-up, no server, no data stored anywhere except your Notion workspace if you want it there.

You pick Claude or Gemini (Gemini has a free tier, no credit card), paste or upload the job description, upload your resume, and click go. Two PDFs come out the other side.

It also runs as a local Flask app with more features (DOCX support, job URL fetching, richer PDFs) and a CLI if that's your thing.

The one rule I actually cared about

The AI is not allowed to make things up. That sounds obvious but it's easy to get wrong. The system prompt on every single call says: you may reorder and reword existing content, you may use keywords from the job description if they honestly describe something the candidate already did, but you cannot add skills, invent metrics, or fabricate roles. If the job asks for five years of Kubernetes experience and the resume doesn't mention Kubernetes, that gap stays in the output.

I've seen other resume tools confidently add skills the user never had. I didn't want to build that.


How Notion MCP works

The Notion integration reads job descriptions from Notion pages and logs every run's output back. If you track jobs in Notion, pass the page ID directly instead of copy-pasting. The system reads the page via MCP.

After each run, two databases get entries. A Job Applications table tracks company, role, date, and a snippet. A linked Outputs database stores the actual resume and cover letter text as readable blocks. A few weeks in, you have every application: what you sent and what they asked for.
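The Outputs entries are plain Notion paragraph blocks. One practical detail: the Notion API caps a single rich-text item at 2,000 characters, so a full resume or cover letter has to be chunked before it's written. A minimal sketch of that chunking (the helper name is mine, not necessarily the project's exact code):

```python
def text_to_blocks(text, chunk=1800):
    """Split long output text into Notion paragraph blocks.

    The Notion API limits rich_text content to 2000 characters per
    item, so anything longer is sliced into multiple blocks. The
    1800 default leaves headroom under that limit.
    """
    blocks = []
    for i in range(0, len(text), chunk):
        blocks.append({
            "object": "block",
            "type": "paragraph",
            "paragraph": {
                "rich_text": [
                    {"type": "text", "text": {"content": text[i:i + chunk]}}
                ]
            },
        })
    return blocks
```

Each block in the returned list can then go straight into the `children` array of a page-create or block-append call.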

I also included .mcp.json for the official @notionhq/notion-mcp-server. Claude Desktop and Cursor pick it up, letting you ask Claude things like "which applications are pending?" or "draft a follow-up for the engineering role."

The Notion API breaks if you write to a property that doesn't exist. Early versions failed when someone's title column wasn't "Name". The fix: introspect the database first, find the actual title property, and put everything else (status, date, company) in the page body as blocks instead of database properties. Works now regardless of configuration.

def _get_title_property_name(db_id):
    """Find the database's title property, whatever the user named it."""
    db = call_notion_mcp("API-retrieve-a-database", {"database_id": db_id})
    for name, data in db.get("properties", {}).items():
        if data.get("type") == "title":
            return name
    return "Name"  # sensible default if introspection turns up nothing
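Once the real title property name is known, the page create only touches that one property and everything else goes into the body as blocks. A sketch of the payload it feeds (helper name hypothetical, not the project's exact code):

```python
def build_title_properties(title_prop, title_text):
    """Build a create-page properties payload keyed by the database's
    actual title property, as discovered by introspection above."""
    return {
        title_prop: {
            "title": [{"type": "text", "text": {"content": title_text}}]
        }
    }
```

Because the only property written is the one the database is guaranteed to have, the create never fails on a missing column.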

The refactor (late 2024): Moved from the Notion SDK to a Python MCP client. All calls now route through src/mcp_notion_client.py, which spawns the Node.js MCP server and communicates via stdio. Same behavior, but now the operations flow through MCP like the .mcp.json config intended. The MCP server is launched on demand, with no persistent process, so it's transparent to the user.
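The stdio transport itself is just newline-delimited JSON-RPC 2.0 messages written to the server's stdin. A minimal framing sketch (the real client in src/mcp_notion_client.py may structure this differently):

```python
import json

def frame_request(req_id, method, params):
    """Frame one JSON-RPC 2.0 message for MCP's stdio transport,
    which expects exactly one JSON object per line."""
    msg = {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}
    return json.dumps(msg) + "\n"

# The client would spawn the Node server and write frames to its stdin, e.g.:
# proc = subprocess.Popen(["npx", "-y", "@notionhq/notion-mcp-server"],
#                         stdin=subprocess.PIPE, stdout=subprocess.PIPE)
# proc.stdin.write(frame_request(1, "tools/call", {...}).encode())
```

Responses come back the same way: one JSON object per line on the server's stdout, matched to requests by `id`.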


Video demo

Resume Tailor Demo


Show us the code

GitHub: https://github.com/rotsl/resume-tailor

Live demo: https://rotsl.github.io/resume-tailor

How it's structured

resume-tailor/
├── docs/index.html                   ← the GitHub Pages app, fully self-contained
├── app.py                            ← local Flask server
├── main.py                           ← CLI
├── instruct.md                       ← formatting rules injected into every prompt
├── .mcp.json                         ← Notion MCP server config
├── .github/workflows/deploy.yml      ← deploys docs/ to GitHub Pages on push
├── scripts/
│   └── setup_notion_databases.py     ← creates the Notion DBs via MCP, writes IDs to .env
└── src/
    ├── tailor.py                     ← AI engine, supports Claude and Gemini
    ├── parser.py                     ← PDF / DOCX / text extraction
    ├── pdf_generator.py              ← PDF output via ReportLab
    ├── web_context.py                ← fetches company context from the web
    ├── mcp_notion_client.py          ← Python MCP client for Notion operations
    └── notion_integration.py         ← high-level Notion read/write (uses MCP)

Supporting two AI providers

src/tailor.py has a single tailor_resume() function that accepts provider, model, and api_key arguments. The same prompts go to both providers. The browser version calls the APIs directly via fetch(); the local version uses the Python SDKs.

# Claude
tailored = tailor_resume(
    resume, job_description,
    provider="claude",
    model="claude-sonnet-4-6",
    api_key="sk-ant-..."
)

# Gemini free tier
tailored = tailor_resume(
    resume, job_description,
    provider="gemini",
    model="gemini-2.5-flash",
    api_key="AIza..."
)

When no key is passed, it falls back to environment variables, so the CLI reads from .env without asking every time.
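The fallback can be as simple as a provider-to-env-var lookup. A sketch, assuming the variable names from the quick-start .env (the project's actual resolution logic may differ):

```python
import os

def resolve_api_key(provider, api_key=None):
    """Use the explicitly passed key if given; otherwise fall back to
    the provider's environment variable (names assumed from .env)."""
    if api_key:
        return api_key
    env_var = {"claude": "ANTHROPIC_API_KEY", "gemini": "GEMINI_API_KEY"}[provider]
    key = os.environ.get(env_var)
    if not key:
        raise ValueError(f"No API key passed and {env_var} is not set")
    return key
```

An explicit argument always wins, so the web UI and the CLI can share the same code path.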

The prompt structure

Two layers. The system prompt sets the hard rules (no fabrication, no adding skills). The user prompt gives the model the original resume, the job description, and any web context about the company as clearly labelled separate sections.

ABSOLUTE RULES (NEVER VIOLATE):
1. You may ONLY use information that exists in the candidate's original resume.
2. Do NOT invent, embellish, or assume any experience, skills, metrics, or facts.
3. You MAY reorder, reword, and emphasize existing content.
4. Mirror keywords from the job description only where they truthfully apply.
5. If the candidate lacks a required skill, do NOT add it. Leave it absent.

The cover letter call gets both the original resume and the already-tailored resume, so it can see exactly what was kept and what was cut.
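A sketch of how those labelled sections might be assembled into the user prompt (the section labels here are assumptions, not the project's exact ones):

```python
def build_user_prompt(resume, job_description, web_context=""):
    """Assemble the user prompt from clearly labelled sections so the
    model can't confuse the candidate's resume with the job posting."""
    sections = [
        ("ORIGINAL RESUME", resume),
        ("JOB DESCRIPTION", job_description),
    ]
    if web_context:
        sections.append(("COMPANY CONTEXT (from the web)", web_context))
    return "\n\n".join(f"=== {label} ===\n{body}" for label, body in sections)
```

The cover letter call would add the tailored resume as a fourth labelled section, giving the model both versions side by side.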

Runtime config using instruct.md

Formatting rules live in instruct.md and get injected into every prompt at call time. Swap the file out and the output changes β€” no code edits. Someone who wants a one-page resume with a specific section order can describe that there. Someone applying to academic roles can put a different set of rules in.
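A minimal sketch of that runtime injection, assuming a loader along these lines (the project's actual code may differ):

```python
from pathlib import Path

def load_formatting_rules(path="instruct.md"):
    """Read the formatting rules at call time, so editing the file
    changes the next run with no restart; falls back to a default
    if the file is missing."""
    p = Path(path)
    if not p.exists():
        return "Use a clean single-column layout."
    return p.read_text(encoding="utf-8")

def with_rules(system_prompt, rules):
    # Append the runtime rules beneath the hard anti-fabrication rules,
    # so the absolute rules always come first.
    return f"{system_prompt}\n\nFORMATTING RULES:\n{rules}"
```

Because the file is re-read on every call rather than cached at import time, swapping instruct.md mid-session takes effect immediately.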

The GitHub Pages version

docs/index.html is the entire app. PDF.js reads uploaded PDFs in the browser, the AI APIs are called directly via fetch, jsPDF builds the output PDFs in memory. The GitHub Actions workflow just copies that one file to Pages on every push to main.

- name: Upload Pages artifact
  uses: actions/upload-pages-artifact@v3
  with:
    path: docs/

No build step, no npm, no bundler. The tradeoff is no Notion logging on the static version, since there's nowhere safe to store the Notion API key client-side.

Notion setup script

python scripts/setup_notion_databases.py YOUR_NOTION_PAGE_ID

Creates both databases via MCP, then writes their IDs into .env automatically. No manual copy-paste needed. The script calls call_notion_mcp("API-create-a-database", {...}) for each database, the same flow as the app itself.
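Writing the IDs back without clobbering the rest of .env is the only fiddly part. A sketch of that upsert (helper name hypothetical, not the script's exact code):

```python
def upsert_env_var(env_text, key, value):
    """Insert or replace KEY=value in a .env file's text, leaving
    every other line untouched."""
    lines = [l for l in env_text.splitlines() if not l.startswith(key + "=")]
    lines.append(f"{key}={value}")
    return "\n".join(lines) + "\n"
```

Run once per database ID after each API-create-a-database call, then write the result back to .env.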

Quick start

git clone https://github.com/YOUR_USERNAME/resume-tailor.git
cd resume-tailor
python -m venv venv && source venv/bin/activate
pip install -r requirements.txt
cp .env.example .env
# Add GEMINI_API_KEY (free) or ANTHROPIC_API_KEY, plus NOTION_API_KEY

python scripts/setup_notion_databases.py YOUR_NOTION_PAGE_ID

python app.py  # β†’ http://localhost:5000
# or
python main.py tailor --resume resume.pdf --job-url https://...

Stack: Claude / Gemini, Notion MCP (Python mcp client + Node.js server), ReportLab, pdfplumber, jsPDF, PDF.js, Flask.
