What I Built
Every developer needs a portfolio. And every developer hates building one.
You spend hours picking a template, manually copying project descriptions from GitHub, formatting your skills section, and writing an "About Me" that doesn't sound like a chatbot wrote it. Then you do it again six months later because your old one is stale.
I got tired of that cycle. So I built Lamefolio AI — an MCP-powered portfolio engine that fetches your entire GitHub profile, optionally ingests your resume (PDF, images — full multimodal), feeds everything through Gemini 2.5 Flash, and outputs a beautifully structured Notion portfolio page. One conversation. One click. Done.
No more copy-pasting. No more stale portfolios. The AI builds it, and you own it in Notion.
What it actually does
- Fetches your GitHub data — repos, languages, READMEs, and project metadata via Octokit REST
- Analyzes your resume (optional) — full multimodal parsing via Gemini: PDFs, images, even photos of a printed resume
- Generates a semantic portfolio schema — Gemini takes the raw data and creates a structured JSON: hero section, skills breakdown, project cards, experience timeline, achievements
- Transforms to Notion blocks — a dedicated Transformer service converts the schema into Notion's block format, with multiple template styles
- Publishes to Notion — creates the page, appends blocks in batches, sets cover images, and delivers a live URL
- AI Chat for live editing — after generation, chat with the AI to search, append, delete, or update content on your portfolio page using function calling
Key Features
- 🪄 5-Service Pipeline — GitHub → AI → Transformer → Notion, orchestrated by a central service
- 🤖 Gemini 2.5 Flash — portfolio schema generation, resume analysis (multimodal), chat with function calling, dev docs generation
- 📋 MCP Server — full Model Context Protocol server exposing generate_portfolio and generate_docs as callable tools via stdio transport
- 🎨 4 Template Styles — Default, Designer-Minimal, Hacker-Dark, Dev-Pro — each with a distinct visual identity
- 💬 AI Chat with Function Calling — Gemini calls Notion tools (search, fetch, append, delete, update, comment) autonomously in a multi-turn loop
- 📄 Multimodal Resume Ingestion — upload PDFs or images; Gemini extracts structured career data and persists it to your profile
- 🔐 OAuth Integration — both GitHub and Notion OAuth flows with dynamic redirect URI detection for dev/production parity
- 🛒 Template Marketplace — browse and apply premium portfolio templates
- 💳 Billing System — credit-based generation with Razorpay integration
- 📊 Dynamic Dashboard — track credits, view analytics, manage portfolios
The architecture follows a service-oriented pipeline where each service does exactly one thing. The Orchestrator Service is the central coordinator — it chains GitHub data fetching, AI schema generation, block transformation, and Notion publishing into a single sequential pipeline.
The critical design decision: Gemini is only used where creativity matters (schema generation, resume parsing, chat). Everything else — data fetching, block transformation, Notion API calls — is deterministic. Zero token overhead, zero hallucination risk.
Architecture Deep-Dive
Why a 5-service architecture?
I could've built one monolithic service that does everything. But that's a recipe for:
- Burnt tokens — fetching GitHub data doesn't need an LLM
- Hallucinated URLs — the Notion publishing step should never make things up
- Debugging nightmares — which part of the monolith failed?
Instead, each service is a specialist:
Four steps, each handled by a dedicated service. The Orchestrator chains them together and handles error propagation.
GitHub Service: Deterministic Data, Zero LLM
The GitHub service calls Octokit REST directly — no agent reasoning needed:
One REST call per repo pulls metadata + deep data. The response goes straight into the AI pipeline. No LLM in the loop — this is pure data fetching.
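To make the "no LLM in the loop" point concrete, here is a minimal sketch of the trimming step — the function and type names (`summarizeRepo`, `RepoSummary`) are hypothetical, but the input field names are the ones GitHub's REST API actually returns for a repository:

```typescript
// Trim a raw GitHub REST repo object down to the fields the AI
// pipeline needs. Input field names (name, description, language,
// stargazers_count, topics, html_url) match GitHub's repo response.
interface RepoSummary {
  name: string;
  description: string | null;
  language: string | null;
  stars: number;
  topics: string[];
  url: string;
}

function summarizeRepo(raw: Record<string, unknown>): RepoSummary {
  return {
    name: String(raw.name),
    description: (raw.description as string | null) ?? null,
    language: (raw.language as string | null) ?? null,
    stars: Number(raw.stargazers_count ?? 0),
    topics: (raw.topics as string[]) ?? [],
    url: String(raw.html_url),
  };
}

// With Octokit, the raw objects would come from something like:
//   const { data } = await octokit.rest.repos.listForUser({ username });
//   const summaries = data.map(summarizeRepo);
```

Deterministic in, deterministic out — nothing here can hallucinate a star count.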
Gemini AI: The Creative Engine
This is where the LLM earns its keep. The AI service has four distinct responsibilities:
- Portfolio Schema Generation — takes raw GitHub + resume data and produces structured JSON
- Resume Analysis — full multimodal parsing of PDFs/images into career data
- Chat with Function Calling — multi-turn conversation with autonomous Notion tool use
- Dev Docs Generation — creates documentation from repo READMEs
The function calling loop in the Orchestrator's getChatResponse method is the most interesting part — Gemini decides which Notion tools to call, the Orchestrator executes them, sends results back, and Gemini decides what to do next. Up to 5 iterations per conversation turn.
Transformer: Schema → Notion Blocks
The Transformer converts the AI-generated schema into Notion's block API format. It supports 4 template styles, each with a distinct visual identity:
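A minimal sketch of that conversion for a hero section — the `HeroSection` shape and `heroToBlocks` name are my illustrative stand-ins, but the block objects follow Notion's documented block format (a `type` key plus a matching object containing `rich_text`):

```typescript
// Hypothetical slice of the AI-generated schema.
interface HeroSection {
  headline: string;
  tagline: string;
}

type NotionBlock = Record<string, unknown>;

// Notion rich text: an array of { type: "text", text: { content } }.
function richText(content: string) {
  return [{ type: "text", text: { content } }];
}

// Pure, deterministic mapping: schema in, Notion block objects out.
function heroToBlocks(hero: HeroSection): NotionBlock[] {
  return [
    { object: "block", type: "heading_1", heading_1: { rich_text: richText(hero.headline) } },
    { object: "block", type: "paragraph", paragraph: { rich_text: richText(hero.tagline) } },
  ];
}
```

Each template style would swap in different block choices (callouts, code blocks, dividers) while keeping the same schema input.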
How I Used Notion & MCP
This is the part I'm most excited about. Lamefolio AI integrates with Notion in two complementary ways: through the official MCP server for extensibility, and through the Notion Client SDK for the core pipeline.
- MCP Server via @modelcontextprotocol/sdk
The MCP server wraps the entire Orchestrator pipeline as callable tools. Any MCP-compatible client (Claude Desktop, Cursor, custom agents) can trigger portfolio generation:
Two tools exposed: generate_portfolio (GitHub handle → full Notion portfolio) and generate_docs (repo URL → technical documentation). The MCP layer is a thin wrapper around the Orchestrator — all the heavy lifting happens in the service pipeline underneath.
- AI Chat with Notion Function Calling — the real power is in the chat interface. After generating a portfolio, users can chat with Gemini to live-edit their Notion pages. Gemini has access to 6 Notion tools:
The Orchestrator runs a multi-turn function calling loop:
Gemini decides which tools to call. The Orchestrator executes them, sends results back, and Gemini decides what to do next — up to 5 iterations per turn. It's autonomous Notion editing through natural conversation.
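The loop can be sketched like this — the names (`chatLoop`, `ModelReply`) are illustrative, not the actual getChatResponse implementation, and the model and tool executor are injected as plain functions:

```typescript
// A model turn either ends with text or requests a tool call.
type ModelReply =
  | { kind: "text"; text: string }
  | { kind: "toolCall"; tool: string; args: unknown };

async function chatLoop(
  callModel: (history: unknown[]) => Promise<ModelReply>,
  runTool: (tool: string, args: unknown) => Promise<unknown>,
  userMessage: string,
  maxIterations = 5, // mirrors the 5-iteration cap per conversation turn
): Promise<string> {
  const history: unknown[] = [{ role: "user", text: userMessage }];
  for (let i = 0; i < maxIterations; i++) {
    const reply = await callModel(history);
    if (reply.kind === "text") return reply.text; // model is done
    // Model asked for a tool: execute it and feed the result back.
    const result = await runTool(reply.tool, reply.args);
    history.push({ role: "tool", tool: reply.tool, result });
  }
  return "Reached iteration limit.";
}
```

The iteration cap matters: without it, a confused model can ping-pong between tool calls indefinitely, burning tokens on every round trip.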
- Notion Client SDK — the core pipeline. For the portfolio generation pipeline itself, I use @notionhq/client directly:

The 500ms delay between batches isn't just for rate limits — it makes the frontend feel like the AI is "building" the portfolio in real-time. Small UX detail, big perceived quality difference.
Lessons Learned
Multimodal resume parsing is surprisingly reliable
I expected Gemini to struggle with varied resume formats — fancy PDFs, scanned images, two-column layouts. It handles all of them. The key was giving it a strict JSON output schema with clear field definitions. Even blurry phone photos of printed resumes get parsed correctly.
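To show what "strict JSON output schema" means in practice, here is an illustrative schema of the kind you can pass to Gemini's structured-output mode (via responseSchema) — the field names are my guesses, not the app's actual schema:

```typescript
// OpenAPI-style schema constraining Gemini's resume-parsing output.
// Field names here are hypothetical examples.
const resumeSchema = {
  type: "object",
  properties: {
    name: { type: "string" },
    summary: { type: "string" },
    skills: { type: "array", items: { type: "string" } },
    experience: {
      type: "array",
      items: {
        type: "object",
        properties: {
          company: { type: "string" },
          role: { type: "string" },
          period: { type: "string" },
        },
        required: ["company", "role"],
      },
    },
  },
  required: ["name", "skills"],
} as const;
```

With the schema enforced, a two-column PDF and a blurry phone photo both come back as the same predictable shape, which is what makes the downstream pipeline deterministic.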
Function calling beats prompt engineering for Notion edits
My first attempt at the chat feature used prompt engineering — telling Gemini to output structured edit commands that I'd parse. It was fragile and unpredictable. Switching to native function calling was night and day. Gemini decides which Notion tools to call, and the results go back into the conversation. No parsing, no regex, no hoping the model follows the format.
Template engines are more valuable than you'd think
I almost skipped templates and went with a single layout. But adding the template system (default, designer-minimal, hacker-dark, dev-pro) was worth every line of code. Users don't just want a portfolio — they want their portfolio. The hacker-dark template renders the entire portfolio as JSON code blocks. Someone will love that.
Batched block appending is essential
Notion's API has strict rate limits (3 requests/second). Sending 50+ blocks one-at-a-time will get you throttled immediately. Batching in groups of 10 with a 500ms delay keeps you safely under limits and creates a satisfying "building" animation on the frontend.
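The batching logic itself is tiny — a sketch, assuming a hypothetical `append` callback (in the real pipeline this would wrap the Notion client's append-block-children call):

```typescript
// Split a block list into fixed-size batches.
function chunk<T>(items: T[], size: number): T[][] {
  const batches: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}

const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

// One request per batch of 10, with a 500ms pause between requests
// to stay under Notion's ~3 req/s rate limit.
async function appendInBatches<T>(
  blocks: T[],
  append: (batch: T[]) => Promise<void>,
  batchSize = 10,
  delayMs = 500,
): Promise<void> {
  for (const batch of chunk(blocks, batchSize)) {
    await append(batch);
    await sleep(delayMs);
  }
}
```

At 10 blocks per request, even a 60-block portfolio lands in 6 requests over about 3 seconds — comfortably inside the limit.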
Notion doesn't support real-time streaming — and that changed my UX plans
I originally wanted a typewriter effect — the portfolio building itself in real-time on the Notion page as the user watches, block by block, with live character-by-character rendering. It would've been the killer demo moment.
Notion's API doesn't support this at all. There are no WebSocket endpoints, no SSE-based push, no streaming block update APIs. The API is purely REST: you write blocks, you read blocks. That's it. The SSE/Streamable HTTP transports mentioned in some docs are specifically for MCP server communication (how an MCP client talks to an MCP server), not for real-time Notion content streaming.
To simulate anything close to a typewriter effect, you'd have to repeatedly call the Update a block endpoint yourself — appending one character at a time to a block's rich_text. At Notion's rate limit of 3 req/s, a 500-character paragraph would take ~2.8 minutes of API calls. Not exactly "real-time."
So I pivoted: instead of streaming to Notion, I stream the frontend experience. The batched block appending with 500ms delays creates the illusion of the AI building the portfolio progressively, and the frontend shows a premium loading skeleton with a real-time progress bar while the backend does its work. The end result feels just as magical — the user just doesn't see individual characters appearing on the Notion page. Sometimes the API's limitations push you toward a better UX anyway.
OAuth redirect URIs are a deployment landmine
The number of hours I spent debugging OAuth failures across local dev vs. Vercel production was embarrassing. The fix: a getRedirectUri function that dynamically detects the frontend origin from request headers. No more hardcoded URLs, no more broken deploys.
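A minimal sketch of that detection, assuming Express-style lowercase headers and a hypothetical callback path — the real getRedirectUri may differ:

```typescript
// Derive the OAuth redirect URI from the incoming request instead of
// hardcoding it per environment.
function getRedirectUri(
  headers: Record<string, string | undefined>,
  callbackPath = "/auth/callback", // hypothetical path
): string {
  // Behind a proxy (e.g. Vercel) the original scheme and host arrive
  // in x-forwarded-* headers; fall back to Host for local dev.
  const proto = headers["x-forwarded-proto"] ?? "http";
  const host = headers["x-forwarded-host"] ?? headers["host"] ?? "localhost:3000";
  return `${proto}://${host}${callbackPath}`;
}
```

The same code then yields http://localhost:3000/... on a dev machine and https://your-app.vercel.app/... in production, so both can be registered once with the OAuth provider.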
Tech Stack
Built with Gemini, Notion API, Model Context Protocol, and an unhealthy amount of TypeScript. If you've ever procrastinated on building a portfolio, let the AI do it for you.
GitHub: Yatharth4005/Lamefolio-ai