Adetayo Lasisi
createthings — A Creative System Built on Notion MCP

Notion MCP Challenge Submission 🧠

This is a submission for the Notion MCP Challenge

What I Built

There is a folder on almost every creative's computer.

Sometimes it is a bookmark list. Sometimes it is a Pinterest board, a Notion page, a camera roll full of screenshots, or a browser tab that has been open for six weeks. The contents are always the same: things they loved when they found them and have not touched since. Landing pages to be studied. Threads that made them stop scrolling. Interfaces they meant to reverse-engineer. Designs they were going to recreate on the weekend.

The weekend never comes.

I built createthings because I got tired of my own inspiration graveyards.

And the more I talked to the designers, developers, and content creators among my friends and colleagues, the more I realized the graveyard was not the core problem. The core problem was that saving something and acting on it had no connection. There was no bridge between the moment of being inspired and the moment of doing something about it. No thread between the thing that excited you and the work that came out of it. No record of the journey from one to the other.

createthings is a creative system for designers, developers, and content creators. It lives across two surfaces: a browser extension that captures inspiration from anywhere on the web, and a web app where that inspiration gets analyzed, understood, acted on, and shared with the world.

The core loop is straightforward to describe:

1. Capture what excites you: a screenshot, a URL, an uploaded image, or even a typed note when the inspiration is more feeling than visual.
2. Analyze it with AI: not just what it looks like, but what makes it work, what you can learn from it, and what skills it requires.
3. Think through your process in a creative journal tied to every project.
4. Create your own version with an AI-generated brief as your starting point.
5. Publish it to your connected social platforms directly from the app, with platform-specific captions and proper credit to the original creator.
6. Share your full creative journey via an auto-generated public portfolio page.

Notion is the backbone of all of it.

I made a deliberate decision early in the build. I did not want Notion to be a sync target or a convenient place to dump data. I wanted it to be the actual brain of the system, the place where everything lives, where the AI pipeline is triggered, where results are written, where the publish queue is managed, and where the portfolio reads from. Every piece of inspiration lives in Notion.

Video Demo

User Flow

Show us the code

sv

Everything you need to build a Svelte project, powered by sv.

Creating a project

If you're seeing this, you've probably already done this step. Congrats!

```shell
# create a new project
npx sv create my-app
```

To recreate this project with the same configuration:

```shell
# recreate this project
npx sv@0.12.8 create --template minimal --types ts --add prettier eslint vitest="usages:unit,component" playwright tailwindcss="plugins:none" sveltekit-adapter="adapter:auto" mcp="ide:vscode+setup:remote" --install npm ./
```

Developing

Once you've created a project and installed dependencies with npm install (or pnpm install or yarn), start a development server:

```shell
npm run dev

# or start the server and open the app in a new browser tab
npm run dev -- --open
```

Building

To create a production version of your app:

```shell
npm run build
```

You can preview the production build with npm run preview.

To deploy your app, you…

Who It Is For

createthings is built for three types of creators, and the experience adapts to each:

Designers get color palette extraction, typography identification, layout and composition breakdowns, component analysis, and a Figma export that sends any captured screenshot directly to their Figma workspace as a named frame.

Developers get UI architecture hints, pattern recognition, likely tech stack indicators, and component structure breakdown, the things you want to know when you see an interface you admire and want to understand how it was built.

Content creators get hook analysis, tone breakdown, narrative structure identification, and an explanation of why a piece likely performed well — the things that turn a saved post into a teachable moment.

These three creator types are the starting point; I still intend to expand to other kinds of creatives so the system can be useful to anyone.

The Feature Set

| Feature | What It Does |
| --- | --- |
| Browser extension | Captures via screenshot (full page or selected area), image upload, pasted URL, or typed note |
| AI analysis | Visual and content breakdown: colors, typography, layout, mood, components, attribution |
| Learning roadmap | AI-generated skill path from the analysis: what you need to learn to build something like this |
| Creative journal | Per-project thought space with AI reflection prompts |
| Publish queue | Platform-specific captions drafted by AI; direct share to Twitter/X, LinkedIn, and Reddit, one-click copy for Instagram |
| Remix tagging | Inspired by / Remixed from / Recreated: ethical attribution built into every publish |
| Figma export | Sends captured screenshots to Figma as named frames |
| Portfolio page | Auto-generated public page from Notion data, shareable via a single link |
| Stale reminders | Smart nudges for inspiration that has sat untouched past a threshold |
| Weekly digest | Summary of what was saved, created, published, and learned |

The Technical Architecture

Stack: SvelteKit + Svelte 5 (web app + API), Manifest V3 browser extension (Chrome/Firefox/Edge/Safari), Notion as the database layer (six linked databases), Uploadcare for image uploads.

AI: Gemini 2.0 Flash for visual analysis with a fallback chain (gemini-2.0-flash → gemini-2.0-flash-lite → gemini-1.5-flash-8b → OpenRouter free vision → Groq text-only). Groq (llama-3.3-70b-versatile) handles all text generation: roadmaps, captions, and note analysis.
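The fallback chain amounts to an ordered retry over the model list. A minimal sketch, assuming a generic `call` wrapper around the provider SDKs (the wrapper and its signature are illustrative; the model order comes from the chain above):

```typescript
// Try each model in order until one succeeds; rate limits and outages
// fall through to the next entry in the chain.
type ModelCaller = (model: string) => Promise<string>;

const VISION_CHAIN = [
  "gemini-2.0-flash",
  "gemini-2.0-flash-lite",
  "gemini-1.5-flash-8b",
  "openrouter-free-vision",
  "groq/llama-3.3-70b-versatile", // text-only last resort
];

async function analyzeWithFallback(
  call: ModelCaller,
  chain: string[] = VISION_CHAIN
): Promise<{ model: string; result: string }> {
  let lastError: unknown;
  for (const model of chain) {
    try {
      return { model, result: await call(model) };
    } catch (err) {
      lastError = err; // record and move on to the next model
    }
  }
  throw new Error(`All models in the chain failed: ${String(lastError)}`);
}
```

Returning the model name alongside the result makes it easy to record which tier actually served the analysis.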

Data flow:

1. The extension saves a new entry to Notion (status: Pending); that is all it does.
2. A Notion database automation fires a webhook to /api/webhook/notion.
3. The server routes by capture type: images and URLs go to Gemini (with node-vibrant extracting colors first); notes go straight to Groq.
4. Results are written back to Notion via the MCP adapter, and the status updates to Ready.
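The routing step can be sketched as a SvelteKit endpoint. The payload property names and handler details here are assumptions; only the route and the Gemini/Groq split come from the flow above:

```typescript
// Sketch of /api/webhook/notion/+server.ts. The webhook payload shape
// ("Capture Type" select property) is illustrative, not the real schema.
type CaptureType = "screenshot" | "url" | "image" | "note";

function routeCapture(type: CaptureType): "gemini" | "groq" {
  // Visual captures need the vision pipeline; plain notes go straight to text.
  return type === "note" ? "groq" : "gemini";
}

export async function POST({ request }: { request: Request }) {
  const payload = await request.json();
  const type = payload?.data?.properties?.["Capture Type"]?.select
    ?.name as CaptureType;
  const pipeline = routeCapture(type);
  // ... enqueue the analysis job (node-vibrant color pass first on the
  // gemini path), then acknowledge the webhook quickly.
  return new Response(JSON.stringify({ pipeline }), { status: 200 });
}
```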

AI Pipeline

For publishing: captions surface in the Publish Queue. Twitter/LinkedIn/Reddit open compose URLs pre-filled; Instagram copies to clipboard. Clicking Mark Published updates the Notion entry.

Publish Pipeline

The Six Databases

On first connect, all six Notion databases are created in one pass to collect their IDs; a second pass then links the relations between them, because you cannot create a relation to a database that does not exist yet.
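The two-pass setup can be sketched like this. The `NotionLike` interface stands in for the real `@notionhq/client` `Client` so the sketch stays self-contained, and the property names are assumptions:

```typescript
// Minimal client shape matching the two SDK calls the sketch needs.
interface NotionLike {
  databases: {
    create(args: {
      parent: { type: "page_id"; page_id: string };
      title: { type: "text"; text: { content: string } }[];
      properties: Record<string, unknown>;
    }): Promise<{ id: string }>;
    update(args: {
      database_id: string;
      properties: Record<string, unknown>;
    }): Promise<unknown>;
  };
}

const DB_NAMES = [
  "Inspiration Vault", "Analysis Results", "Roadmaps",
  "Projects", "Publish Queue", "User Profile",
];

async function setupDatabases(notion: NotionLike, parentPageId: string) {
  // Pass 1: create every database and collect its ID.
  const ids: Record<string, string> = {};
  for (const name of DB_NAMES) {
    const db = await notion.databases.create({
      parent: { type: "page_id", page_id: parentPageId },
      title: [{ type: "text", text: { content: name } }],
      properties: { Name: { title: {} } }, // every database needs a title property
    });
    ids[name] = db.id;
  }
  // Pass 2: both sides exist now, so relations can be patched in.
  // (One relation shown; the rest follow the same pattern.)
  await notion.databases.update({
    database_id: ids["Analysis Results"],
    properties: {
      Inspiration: {
        relation: { database_id: ids["Inspiration Vault"], single_property: {} },
      },
    },
  });
  return ids;
}
```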

| Database | Purpose |
| --- | --- |
| Inspiration Vault | Every saved piece; the source of truth |
| Analysis Results | AI breakdown linked to each inspiration |
| Roadmaps | Learning paths generated from analyses |
| Projects | Work created, linked to inspirations |
| Publish Queue | Drafted and approved social posts |
| User Profile | Preferences, platform connections, Creative DNA |

Technology Choices

AI: Gemini 2.0 Flash for vision (generous free tier, reliable structured output) with a four-model fallback chain so the pipeline survives rate limits. Groq for all text generation because speed matters: llama-3.3-70b-versatile keeps roadmap generation under 15 seconds.

Social publishing: Compose URLs instead of OAuth integrations — Twitter, LinkedIn, and Reddit accept pre-filled content as URL params, no API approval needed. Instagram has no compose URL so it falls back to clipboard copy. Ship with what works now, add direct API integrations when you have approvals.
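A minimal sketch of assembling those compose URLs. The endpoint paths are the public intent/share URLs the post relies on; the function itself and its parameter names are illustrative:

```typescript
// Build a pre-filled compose URL for a platform; no OAuth, no API approval.
function composeUrl(
  platform: "twitter" | "linkedin" | "reddit",
  caption: string,
  link: string
): string {
  const text = encodeURIComponent(caption);
  const url = encodeURIComponent(link);
  switch (platform) {
    case "twitter":
      return `https://x.com/intent/post?text=${text}&url=${url}`;
    case "linkedin":
      return `https://www.linkedin.com/shareArticle?mini=true&url=${url}&title=${text}`;
    case "reddit":
      return `https://www.reddit.com/submit?url=${url}&title=${text}`;
  }
}
```

Instagram is absent by design: it has no compose URL, so the app falls back to copying the caption to the clipboard.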

How I Used Notion MCP

I made a rule early in the build: Notion should be the brain of the system, not a sync target. Every intelligent write-back goes through an MCP adapter layer — a set of functions in src/lib/server/notion/mcp.ts that wrap the Notion SDK calls for all agent actions.

The distinction between the REST API layer and the MCP layer is intentional. The REST API handles database operations: creating structure, querying records, and managing schemas. The MCP layer handles agent actions: an AI system reading context from a workspace and writing back purposefully — the same pattern a person working inside Notion would follow.

The Specific Operations

writeAnalysisResults — called after every analysis completes. Creates a page in the Analysis Results database with the full breakdown — color palette, typography notes, layout analysis, mood tags, component identification, attribution — and links it back to the source inspiration entry.

createRoadmap — called when a user requests a roadmap. Creates a structured Notion page in the Roadmaps database — not just a database row, but a full page with sections, skill descriptions, milestone checkboxes, and resource links. The page is created with the correct parent relation to the source inspiration, so the connection is permanent and navigable.

writePublishDrafts — called by the caption generation pipeline. Writes platform-specific captions to the Publish Queue entry — Twitter, LinkedIn, Instagram, Reddit — all with the credit line assembled from the stored attribution data.

updateInspirationStatus — moves entries through their lifecycle: Pending → Ready → Active → Stale → Archived.
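The lifecycle can be enforced with a small transition map. This is a sketch assuming the allowed transitions implied by the order above; the text only fixes the sequence of states, so the revive/retire edges are assumptions:

```typescript
// Lifecycle from the post: Pending → Ready → Active → Stale → Archived.
type Status = "Pending" | "Ready" | "Active" | "Stale" | "Archived";

const NEXT: Record<Status, Status[]> = {
  Pending: ["Ready"],
  Ready: ["Active", "Stale"],
  Active: ["Stale", "Archived"],
  Stale: ["Active", "Archived"], // assumed: a stale entry can be revived or retired
  Archived: [],                  // terminal state
};

function canTransition(from: Status, to: Status): boolean {
  return NEXT[from].includes(to);
}
```

Guarding transitions this way keeps the agent from writing nonsensical status jumps back into Notion.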

Why a Dedicated MCP Layer

This is the question worth answering directly.

When the AI system writes results back to Notion, it is not inserting records — it is making decisions: what to name the page, how to structure the sections, which properties to set, what relations to create, how to format the skill descriptions so they are useful when the user opens the page later. That is an agent action. The MCP layer is the right abstraction for it. The REST API would require pre-specifying every structural decision in code. The MCP layer lets those decisions happen in context.

Having a clean adapter layer also makes the path to the hosted Notion MCP remote server straightforward. Every call goes through one place. The connection is established once per user session with their OAuth token and reused across all operations.

What I Learned

The hardest part was not the AI integration. It was the database setup order.

Notion relations between databases require both databases to exist before you can create the relation. That sounds obvious until you are writing the setup script at 1am and realize you cannot add the relation from Inspiration Vault to Analysis Results until Analysis Results exists, but you also cannot add it to a database you have not created yet. The solution was simple once I saw it: create all six databases first, collect their IDs, then make a second pass to patch the relations in. Two passes where I wanted one. Not elegant, but reliable.

The social API reality was the other hard lesson. I knew Instagram Graph API required Facebook app approval. I knew Twitter/X v2 could take time. What I underestimated was how much value there is in the compose URL approach first — x.com/intent/post, linkedin.com/shareArticle, reddit.com/submit. The user sees exactly what they are about to post. There is no abstraction between the caption and the publish. Add the server-side API integrations when you have time and approvals; start with the compose URLs and let the user be in control.

The thing that surprised me most was the note capture.

I almost cut it three separate times. It felt like scope creep. A text input in a browser extension — what problem does that solve that a notes app does not? I kept it because the use case would not leave me alone: what do you do when the inspiration is not a URL or an image? What do you do when you are in a meeting and you think "I want to make something that feels like the opposite of corporate" and you want to capture that before the meeting ends and you forget?

I also learned that the creative journal resonates more with experienced creators than with beginners. Beginners want to capture and analyze. Experienced creators want to document their process because they know the process is the portfolio, not just the output. That insight is shaping how I think about onboarding — meeting users where they are rather than assuming everyone is ready for the same features at the same time.
