DEV Community


Drop Your Challenge Submission Here - v0

Nikoloz Turazashvili (@axrisi) on March 17, 2026

Built something for a DEV Challenge but feel like not enough people saw it? This post is for exactly that. Sometimes great submissions get buried...
Collapse
 
theycallmeswift profile image
Swift

This is such a cool idea, thanks for starting the discussion!

Collapse
 
axrisi profile image
Nikoloz Turazashvili (@axrisi)

Receiving such feedback from someone with an 8 Year Club badge is flattering! haha.

Thanks, man :)

Will keep posting new versions as more challenges come!

Collapse
 
highflyer910 profile image
Thea

DevStretch
An installable PWA that interrupts your coding session with dev-themed movement breaks.
Highlights: terminal dark aesthetic, voice guidance, CLI progress bar, stand-up reminders, zero dependencies, and vanilla JS only :)
Submission link
Feedback welcome on anything, but especially curious if the notification flow works for you (it's my current open issue 😅)

Collapse
 
axrisi profile image
Nikoloz Turazashvili (@axrisi)

Hey! Love the project - the terminal aesthetic and zero-dependency approach are really clean.
I put together a PR that replaces the browser TTS with pre-generated ElevenLabs audio for higher quality voice guidance, while keeping full offline support through the Service Worker cache: PR #2

feat: replace browser TTS with pre-generated ElevenLabs audio #2

Replaces window.speechSynthesis (browser TTS) with high-quality pre-generated ElevenLabs audio files.

  • All 103 voice clips are cached by the Service Worker for full offline support
  • Includes a Node.js generation script (scripts/generate-tts.js) for reproducibility
  • Matches the original voice flow exactly (only exercise 1 gets full description read aloud)
  • Custom audio queue system handles sequential playback and graceful cancellation seamlessly

Would love to hear what you think!

Collapse
 
highflyer910 profile image
Thea

Thank you so much for this, I really appreciate the effort! 🙏
The audio quality is impressive, but I want to keep the app fully lightweight and dependency-free; that was one of the core goals from the start.

Also tbh, the slightly robotic browser TTS fits the terminal aesthetic better than I expected, it feels almost intentional 🤭

So I’ll stick with the native browser APIs for now. Though this is a very cool approach and a great reference for anyone who wants higher-quality audio in their own fork!

Thread Thread
 
axrisi profile image
Nikoloz Turazashvili (@axrisi)

No worries. It's up to you :)

The PR didn't add any dependencies, but it did increase the overall size of the project by about 4 MB because of the audio files served to users. The Vercel free plan's bandwidth allowance covers that, and I made the files cacheable in the browser, so it shouldn't be a big deal either.

But again, all good! You can close that PR if there is no need.

Thread Thread
 
highflyer910 profile image
Thea

Thank you for clarifying! Really appreciate the thought you put into this, especially the caching approach 😊 I’ll keep it in mind

Collapse
 
axrisi profile image
Nikoloz Turazashvili (@axrisi)

🇬🇪 🇬🇪 🇬🇪

Collapse
 
dannwaneri profile image
Daniel Nwaneri

Built a knowledge evaluator on Cloudflare Workers that scores conversation excerpts using Workers AI and routes them to Notion based on confidence — high signal auto-promotes, ambiguous items land in a Review Queue for human judgment, Claude Desktop reads them back via MCP. The bidirectional loop was the part most submissions missed.
dev.to/dannwaneri/i-built-a-knowle...
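The confidence-based routing described above can be sketched in a few lines. This is a hypothetical illustration only; the function name, thresholds, and labels are mine, not taken from the project:

```python
def route_by_confidence(item: dict, promote_threshold: float = 0.8,
                        review_threshold: float = 0.5) -> str:
    """Route a scored excerpt: high signal auto-promotes to Notion,
    ambiguous items land in a Review Queue for human judgment,
    and low signal is dropped. Thresholds are illustrative."""
    score = item["confidence"]
    if score >= promote_threshold:
        return "auto-promote"
    if score >= review_threshold:
        return "review-queue"
    return "discard"
```

The interesting design question is where the two thresholds sit: too wide a gap and the Review Queue floods, too narrow and bad items auto-promote.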

Collapse
 
axrisi profile image
Nikoloz Turazashvili (@axrisi)

Nice one!
Anyone with experience in Cloudflare Workers?

Collapse
 
dannwaneri profile image
Daniel Nwaneri

Yes, I've been building on Workers for a few years in production. The evaluator was the first time I paired Workers AI with Notion as a judgment surface rather than just storage. Happy to answer anything on the Workers or Workers AI side.

Collapse
 
sushantrahate profile image
Sushant Rahate

Project Title
JS Daily Byte - Daily JavaScript Quotes Telegram Channel

Short Description
JS Daily Byte is an automated Telegram channel that posts one JavaScript concept every day. It uses GitHub Actions as a scheduler, reads quotes from a JSON file, and sends the daily post to Telegram using the Bot API. The whole system runs without any server, database, or hosting.

Submission Link
dev.to/sushantrahate/js-daily-byte...

Feedback I’m Looking For
I’d love feedback on:

  • Suggestions to improve the architecture and automation flow
  • Ideas to make the daily JavaScript content more engaging for developers
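For readers curious how the serverless flow described above hangs together, here is a minimal Python sketch of the same idea (file names and environment variable names are assumptions; the actual project may differ): a GitHub Actions cron job runs the script, which picks the day's quote from a JSON file and posts it through the Telegram Bot API's sendMessage endpoint.

```python
import json
import os
import urllib.request
from datetime import date

def pick_quote(quotes: list, day_ordinal: int) -> str:
    # Deterministic rotation: the cron schedule alone decides which quote runs,
    # so no state needs to be stored anywhere.
    return quotes[day_ordinal % len(quotes)]

def post_daily_quote(quotes_path: str = "quotes.json") -> None:
    with open(quotes_path, encoding="utf-8") as f:
        quotes = json.load(f)
    payload = json.dumps({
        "chat_id": os.environ["TELEGRAM_CHAT_ID"],
        "text": pick_quote(quotes, date.today().toordinal()),
    }).encode()
    req = urllib.request.Request(
        f"https://api.telegram.org/bot{os.environ['TELEGRAM_BOT_TOKEN']}/sendMessage",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)  # no server, database, or hosting needed
```

The day-ordinal rotation keeps the whole system stateless, which is what makes the no-server, no-database setup possible.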
Collapse
 
axrisi profile image
Nikoloz Turazashvili (@axrisi)

I love the simplicity of this project! There are many ideas for making it "better", but they would all add complexity.

I think one thing you could add is an open discussion group in Telegram.

And maybe add more channels to distribute, like X, Mastodon, BSKY.

Collapse
 
sushantrahate profile image
Sushant Rahate

Thanks for the suggestion! I tried to keep the system intentionally simple with GitHub Actions only. ♥️

The discussion group is a really good idea actually. It could turn the channel into more of a learning community. Also thinking about distributing the posts to other platforms later.

Thread Thread
 
axrisi profile image
Nikoloz Turazashvili (@axrisi)

Happy you found my suggestions useful :)

I think you could also add some inline buttons under each post that do something useful. Not sure what exactly; maybe upvotes. Or for upvotes you could use emojis: create some styled emojis and allow only those to be used on posts. You could also allow paid reactions if someone wants to support the project.

this is how you limit reactions:

Thread Thread
 
sushantrahate profile image
Sushant Rahate

Thanks! I’ve updated the reactions and decided not to enable paid reactions.

Thread Thread
 
axrisi profile image
Nikoloz Turazashvili (@axrisi)

Nice! I dropped one custom emoji in the discussion chat; if you like it, you can add it :)

Collapse
 
ben profile image
Ben Halpern

Good call

Collapse
 
axrisi profile image
Nikoloz Turazashvili (@axrisi)

Thanks, Ben.

I’m thinking of turning this into a recurring series, like “Welcome Thread v1, v2, v3” as new challenges go live. Since DEV only allows 4 tags per post, that also helps keep each thread focused on active challenges and recent projects.

Collapse
 
ben profile image
Ben Halpern

Yeah, I agree something like this :)

Collapse
 
axrisi profile image
Nikoloz Turazashvili (@axrisi)

Let me start 👇

Here are a few of my recent projects and challenge submissions.

1. Notion Skills Registry

A Notion-based registry designed to manage AI agent skills like packages.

Instead of duplicating prompts and workflows across projects, this system lets you version, organize, and distribute skills from a central workspace. It works well for teams building with MCP and agent frameworks where skills evolve quickly.


2. Gemini-Powered History Narrator

An AI storytelling project that turns historical events into engaging narrated experiences using Gemini.

The post reflects on building the project months ago and shares lessons learned about AI narration, storytelling prompts, and what I would do differently today.


3. Issue Discovery Tool for Open Source Contributors

A tool built to help developers find open-source issues to contribute to faster.

Instead of browsing GitHub endlessly, it surfaces relevant issues and helps contributors discover projects where they can actually make an impact.


Curious to hear what you think.

Now it’s your turn. Drop your challenge submission below 👇

Collapse
 
jacksonkasi profile image
Jackson Kasi

Hey 👋

I’d like to submit my project, TableCraft

🔗 GitHub: github.com/jacksonkasi1/TableCraft

It’s a developer-first tool focused on making data tables more efficient, flexible, and easy to use in real-world apps. Built with a strong focus on performance and usability.

Would love your feedback! 🚀

Collapse
 
axrisi profile image
Nikoloz Turazashvili (@axrisi)

hey!
Have you submitted the project using the template of the challenge?
Link to challenges: dev.to/challenges

Do that first if you haven't already. This thread is for getting feedback, not for submitting projects to challenges :)

Collapse
 
jacksonkasi profile image
Jackson Kasi

Oops! Sorry, I didn’t notice that. Thanks for pointing it out.

Collapse
 
kislay profile image
Kumar Kislay

This is an awesome tool. Have you listed it on forg.to?

Collapse
 
kenwalger profile image
Ken W Alger • Edited
Archival Intelligence: A Forensic Rare Book Auditor

I built the Rare Book Intelligence MCP Server, a specialized forensic agent that turns a Notion workspace into an expert appraisal lab. In the world of high-value assets, the difference between a lowercase 'j' and a capital 'J' on a 1925 Gatsby dust jacket represents a $150,000 valuation swing.

dev.to/kenwalger/archival-intellig...

Any feedback is welcome, but how would you take this to the next level? I have thoughts (and a current blog series) on my expansion, but what about folks here?
Collapse
 
axrisi profile image
Nikoloz Turazashvili (@axrisi) • Edited

1/2
This is a unique niche!

One thing I can suggest right away is to turn that prompt you copy-pasted into a reusable skill and add it wherever you need it.

Here is an example, but I didn't dig deep enough to check whether it fits the workflow 100%, so please double-check:

# Forensic Book Audit

## Purpose
Conduct a formal forensic audit for a collectible book by comparing observed copy details against bibliographic ground truth and market data, then update internal records if a material discrepancy is confirmed.

## Inputs
- `book_title`: Title of the book being audited
- `claimed_edition_year`: Claimed edition/year to verify
- `observed_feature`: Specific observed characteristic of the copy
- `expected_feature_field`: Bibliographic feature to validate against
- `catalog_database`: Name of the catalog database
- `bibliography_database`: Name of the master bibliography database
- `market_database`: Name of the market results database
- `audit_log_database`: Name of the audit log database
- `flag_status_value`: Status label to apply if discrepancy is confirmed
- `risk_threshold_note`: Optional explanation of why the discrepancy is material

## Required Data Sources
1. Books Catalog
2. Master Bibliography
3. Market Results
4. Audit Log

## Protocol
1. **Search Catalog**  
   Find the entry for `{{book_title}}` in `{{catalog_database}}`.

2. **Archival Lookup**  
   Search `{{bibliography_database}}` for `{{book_title}}` and `{{claimed_edition_year}}`.  
   Retrieve the bibliographic ground truth for `{{expected_feature_field}}`.

3. **Market Analysis**  
   Search `{{market_database}}` for comparable market results tied to the relevant states, issue points, or variants.  
   Identify the price difference between the correct state and the observed state.  
   Save the most relevant citation link if one exists.

4. **Forensic Collation**  
   Compare:
   - Observation: `{{observed_feature}}`
   - Ground Truth: bibliographic requirement for `{{expected_feature_field}}`

   Determine whether the observation matches or conflicts with the required point.

5. **Execution Rules**
   - If the discrepancy is confirmed:
     - Update the matching record in `{{catalog_database}}`
     - Set its status to `{{flag_status_value}}`
     - Create a new entry in `{{audit_log_database}}`
     - Include:
       - book title
       - claimed edition/year
       - observed feature
       - ground truth
       - discrepancy summary
       - estimated overvaluation risk
       - supporting market reference
   - If no discrepancy is confirmed:
     - Do not change status
     - Record that the copy is consistent with the bibliography if logging is required

6. **Reporting**
   Return a final report in Markdown containing:
   - Exhibit Label
   - Audit Verdict
   - Summary of discrepancy or match
   - Estimated market impact
   - Citation link from market results, if available
   - Notion page IDs or record IDs updated

## Decision Standard
A discrepancy is confirmed only if:
- the observed feature clearly conflicts with the master bibliography, and
- the conflicting point materially affects state, issue, or market value.

## Output Format

### Exhibit Label
`EXHIBIT: {{book_title}} Forensic Audit`

### Report Template
- **Book:** {{book_title}}
- **Claimed Edition/Year:** {{claimed_edition_year}}
- **Observed Feature:** {{observed_feature}}
- **Ground Truth:** {{ground_truth}}
- **Verdict:** {{Confirmed Discrepancy / No Discrepancy}}
- **Catalog Status:** {{Updated to Flagged / No Change}}
- **Estimated Market Impact:** {{value_difference}}
- **Audit Log Entry Created:** {{Yes / No}}
- **Updated Record IDs:** {{record_ids}}
- **Market Citation:** {{citation_link_or_none}}

## Behavior Rules
- Be precise and conservative.
- Do not infer bibliographic facts without evidence from the master bibliography.
- Do not estimate market impact without support from market records.
- If any required source is missing, explicitly report that the audit is incomplete.
- Prefer exact matching on title and edition/year before using fuzzy matching.

## Example Invocation
- `book_title`: The Great Gatsby
- `claimed_edition_year`: 1925
- `observed_feature`: Back jacket has a capital "J" in "Jay Gatsby"
- `expected_feature_field`: dust jacket state point
- `catalog_database`: Books Catalog
- `bibliography_database`: Master Bibliography
- `market_database`: Market Results
- `audit_log_database`: Audit Log
- `flag_status_value`: Flagged
- `risk_threshold_note`: Potential six-figure overvaluation if jacket state is misidentified
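Steps 4 and 5 of the protocol, together with the Decision Standard, reduce to one small check. A toy Python sketch (function and argument names are mine, purely illustrative):

```python
def collate(observed: str, ground_truth: str, materially_affects_value: bool) -> str:
    """Toy version of Forensic Collation + Execution Rules: a discrepancy
    is confirmed only when the observation conflicts with the master
    bibliography AND the conflicting point is material to value."""
    conflicts = observed.strip().lower() != ground_truth.strip().lower()
    if conflicts and materially_affects_value:
        return "Confirmed Discrepancy"
    return "No Discrepancy"
```

A real implementation would compare structured issue points rather than raw strings, but the two-condition standard stays the same.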
Collapse
 
axrisi profile image
Nikoloz Turazashvili (@axrisi)

2/2
Another thing: from what I can see in the repo, it only works on source CSVs, right?
Are there any public APIs to get the data, so the AI could fetch it alongside the general analysis, database queries, and Notion updates?

Thread Thread
 
kenwalger profile image
Ken W Alger

In my cursory look at 'rarity state' resources for things like books, coins, etc., I didn't find one that fit what I was looking for in this example application. The idea is also extensible beyond books, right? Postage stamps, coins, sports memorabilia, and so on all have high-dollar-value items where the same approach could apply.

Nice job on the prompt, btw.

Thread Thread
 
axrisi profile image
Nikoloz Turazashvili (@axrisi)

I guess that's the idea for your next billion-dollar SaaS project: a service that provides that info for all collectibles? ))

Thread Thread
 
kenwalger profile image
Ken W Alger

That's not a project I personally want to tackle, but it would be interesting.

Collapse
 
astrodeeptej profile image
deeptej

CosmoDex

Now you can simply 'speak' to NASA NEO data, thanks to Gemini! Check it out here: dev.to/astrodeeptej/fusing-nasa-da...

Collapse
 
jowi00000 profile image
Jowi A • Edited

Hey everyone, here's my submission: Plante, an automated greenhouse and gamified learning platform. It even has the little game from the app that you can play within the blog 😀 Would love feedback on the app, and also tips for getting better at working with hardware. Thanks!

Collapse
 
vmvenkatesh78 profile image
venkatesh m • Edited

flintwork-token-sync: Designers edit design tokens in a Notion table, an AI agent validates everything and builds CSS. Broken references get caught before they hit code.
Built it with two MCP servers - Notion MCP handles the database, a custom one handles validation and build. Claude orchestrates both.
dev.to/vmvenkatesh78/i-built-an-mc...
Here's the link to a demo Watch the demo →

Would love to know if the dual-MCP architecture makes sense from the writeup or if it needs more explanation.
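To make the "broken references get caught before they hit code" step concrete, here is a minimal sketch of reference validation. The `{token.path}` syntax is an assumption for illustration; the real project's token format may differ:

```python
import re

def validate_tokens(tokens: dict) -> list:
    """Scan every token value for {token.path} references and report
    any that point at a token missing from the table, so broken
    references fail validation before the CSS build step runs."""
    errors = []
    for name, value in tokens.items():
        for ref in re.findall(r"\{([\w.-]+)\}", str(value)):
            if ref not in tokens:
                errors.append(f"{name} -> missing token '{ref}'")
    return errors
```

Running this kind of check as its own MCP tool is what lets the agent refuse to build CSS from an inconsistent Notion table.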

Collapse
 
axrisi profile image
Nikoloz Turazashvili (@axrisi)

Hey, just checked the submission. Please include the demo video. I think it is part of the requirements anyway.

It would be great if you could show how this is intended to be used before anyone can give suggestions. :)

Collapse
 
vmvenkatesh78 profile image
venkatesh m • Edited

Hey @axrisi
I've added a link to the demo video under the section titled Video Demo in the post.
Please let me know if you're unable to access it.
Here's the link: Watch the demo →

Thread Thread
 
axrisi profile image
Nikoloz Turazashvili (@axrisi)

I just watched it and got the full picture now.

We can definitely see it working, and it's a really interesting project.

Do you think this could work even better if Figma variables synced into Notion, so designers would not have to maintain both manually?

Feels like that might make the workflow more natural for actual design teams: designers keep changing values in Figma, Notion stays as the structured token/control layer, and your existing validation + build pipeline still does the rest.

Also curious whether Figma variables could map directly to your Notion token schema, or whether Figma MCP could handle that first and then feed the rest of your existing flow.

Thread Thread
 
vmvenkatesh78 profile image
venkatesh m

Yeah, that's a good idea actually. Figma variables would map pretty cleanly to the three tiers: colors as globals, modes for light/dark as semantic tokens.
A Figma MCP could write into Notion and the rest of the pipeline wouldn't need to change since it just reads from Notion regardless of how tokens got there.
Haven't built that part yet but it's definitely something I want to explore. Notion stays as the control layer, Figma just becomes another input.

Thanks for the suggestion.😀

Thread Thread
 
axrisi profile image
Nikoloz Turazashvili (@axrisi) • Edited

Yes! Amazing that you also think it could work.

I just imagined a manager who needs to tell designers they have to maintain the Notion database for all changes they make in Figma. 🫣😁

This could make their life easier, and it would probably be less error-prone too.

Collapse
 
curiousvlxd profile image
Vlad

WellInsightEngine

A prototype platform that turns oil well telemetry into AI-generated operational insights.
It combines TimescaleDB for KPI precomputation with Gemini for reasoning, so AI explains instead of calculating.

It detects things like instability, repeated choke adjustments, and abnormal patterns, and generates actionable insights for engineers.

Submission:

Demo:

Would appreciate feedback on:
• insight quality and usefulness
• architecture decisions
• potential real-world applicability

Collapse
 
axrisi profile image
Nikoloz Turazashvili (@axrisi)

Interesting to hear about new niches I never thought of!
One question I have: would it be correct to say that any professional looking at those metrics would immediately understand them all without the Gemini summaries? Or is it more complex, requiring digging, so that real human hours are saved by plugging in Gemini?

Also, what about cron jobs to generate those reports both periodically and on demand?
I couldn't see whether there is a database to save those Gemini reports so you can look back at them after some time has passed. Maybe even export them as PDFs if anyone needs to show them to people who don't have access to those metrics/reports.

And one more question/suggestion: I think making an MCP for this service would be incredible. It would be an interactive chat with an agent where it gets all metrics, generates a report, and also digs deep into the solution if anything needs to be fixed.

Also, please check the Cloud Run deployment:

Collapse
 
curiousvlxd profile image
Vlad

Good questions, thanks for taking a close look.

Metrics vs Gemini
I checked this with a professional from DTEK. The feedback was that the reasoning is accurate and matches how engineers would interpret these metrics in practice. It's not that metrics are unreadable, but they do require digging and context switching.

Cron jobs
Right now it's more on-demand, but the infrastructure is already in place for scheduled jobs. We can leverage Cloud Tasks to trigger the insight generation endpoint, since the system is built natively on GCP.

Persistence
Insights are stored in the database and linked to the data used during reasoning, so they can be revisited later.
You can also share them: well-insight-engine-frontend-servi...

Export
PDF export makes sense, especially for people without access to the system.

MCP
Yeah, that direction makes a lot of sense.

Cloud Run downtime
Thanks for pointing it out. GCP credits ran out at some point, I topped up the balance, so it should be accessible now, feel free to check: well-insight-engine-frontend-servi...

Note: this is a prototype with pre-seeded data, so there's no real-time ingestion. Better to filter from February in the calendar to see data:
This is where you can select the time range and trigger insight generation.

Thread Thread
 
axrisi profile image
Nikoloz Turazashvili (@axrisi)

Thanks for explaining in detail :)
Wish you luck in the challenge.

Collapse
 
isah_alamin_93d4e4d2ab01f profile image
Isah Alamin • Edited

Notion Memory Engine: I built a system that lets AI tools share the same Notion database as a brain. You finish a conversation in Claude, type the @memory -create command, and the full context gets saved to Notion. Open Copilot in VS Code and it reads from that same database: no copy-pasting, no starting over.
I'm not super confident in the demo video yet and think I might need to redo it. I'd also love feedback on whether the workflow comes through clearly without a live narrator.
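For anyone wondering what the Notion side of such a save looks like, here is a hypothetical sketch of the request body for the Notion API's pages.create endpoint. The "Name" title property and the database layout are assumptions for illustration, not taken from the project:

```python
def build_memory_payload(database_id: str, title: str, context: str) -> dict:
    """Build a pages.create request body that stores a finished conversation
    as a new page in the shared memory database: the title becomes the
    database's title property, the context becomes a paragraph block."""
    return {
        "parent": {"database_id": database_id},
        "properties": {"Name": {"title": [{"text": {"content": title}}]}},
        "children": [{
            "object": "block",
            "type": "paragraph",
            "paragraph": {"rich_text": [{"text": {"content": context}}]},
        }],
    }
```

Because both Claude and Copilot only ever read and write this one database, the payload shape is the whole "shared brain" contract.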

Collapse
 
axrisi profile image
Nikoloz Turazashvili (@axrisi)

Nothing much to critique here, honestly. It’s simple, clear, and easy to install, which is exactly what makes it good. Nice work.

I also think this is a great example of the kind of use case my submission is meant to support, especially since you’re working with skills:

It can help manage skills like @memory -read and @memory create full more cleanly.

So if you ever want to update or change a skill later, you can do it once in Notion and then install or update it across every workspace where you use it.

It also creates a more solid infrastructure around your skills, so they become easier to maintain, version, and reuse as your setup grows.

Collapse
 
isah_alamin_93d4e4d2ab01f profile image
Isah Alamin

Appreciate that, And your Skills Registry project looks solid, definitely feels like they'd pair well together. Will check it out!

Collapse
 
manojpisini profile image
Manoj Pisini

ENGRAM — Engineering Intelligence That Lives in Your Notion

A self-hosted Rust binary that listens to your GitHub repos via webhooks, runs every event through 9 specialized AI agents (Claude), and writes structured intelligence into 23 interconnected Notion databases — all via MCP.

No local database. Notion IS the persistence layer.
What it covers out of the box:
🔐 Security audits on every PR
📉 Performance regression tracking
📋 RFC lifecycle management
🚀 Auto-generated onboarding docs
🔍 Code review pattern analysis
💯 Codebase health scoring

One binary. Zero config files. Browser-based setup wizard.

GitHub logo manojpisini / engram

ENGRAM is a self-organizing engineering intelligence platform. It connects your GitHub repositories, Notion workspace, and Claude AI into a single autonomous system that continuously analyzes your codebase and writes structured intelligence directly into Notion.

ENGRAM Banner

Engineering Intelligence, etched in Notion.


No polling. No manual data entry. GitHub webhooks push events to ENGRAM, 9 specialized AI agents interpret them using Claude, and every insight — security audits, performance regressions, architecture maps, RFC lifecycle tracking, team health reports, onboarding documents — is written as structured, queryable, relational data in your Notion workspace.

Notion is the central nervous system. Every metric, every decision, every piece of intelligence lives in 23 interconnected databases in your workspace.

Key Features

  • Single binary — dashboard, config template, and Windows icon all embedded via rust-embed. Just download and run.
  • 9 AI

🔗 Full submission: dev.to/manojpisini/engram-ai-power...

Feedback I'd genuinely love:

  • Is Notion-as-database a convincing architecture, or does it feel like a constraint?
  • Does the 9-agent breakdown make sense, or would fewer, broader agents serve better?
  • Any dashboard panels you'd want that aren't there yet?
Collapse
 
rotsl profile image
RoTSL

Resume Tailor is an AI-powered resume and cover letter tailoring tool. Give it a job posting and your resume. It outputs a tailored resume and cover letter as downloadable PDFs

Collapse
 
axrisi profile image
Nikoloz Turazashvili (@axrisi) • Edited

Hey, maybe I missed it, but how is Notion MCP actually used?
you are calling the client directly, no?

import os

from notion_client import Client  # official Notion SDK for Python


def get_notion_client() -> Client:
    # Fail fast with a helpful message if the integration key is missing.
    api_key = os.environ.get("NOTION_API_KEY")
    if not api_key:
        raise ValueError(
            "NOTION_API_KEY not set. Add it to your .env file.\n"
            "Get your key at: https://www.notion.so/my-integrations"
        )
    return Client(auth=api_key)
Collapse
 
rotsl profile image
RoTSL

You’re right! Thanks for pointing that out; in my current setup I’m calling the Notion client directly, not using MCP.
I treated Notion as a structured data source in a deterministic pipeline, so I didn’t delegate control to the LLM.
To fully meet the requirement, I’d expose Notion via an MCP server and let the model orchestrate those calls instead.

Collapse
 
balkaran profile image
Balkaran Singh

Thanks for creating this space! It is a huge help for those of us whose posts got a bit buried in the main feed.

Project Title: Stop Context Switching: How I Built an Autonomous AI Shield (Synapse)

What it does: It’s an AI "bouncer" built with Node.js, Groq (Llama-3), and Notion MCP. It intercepts incoming Slack messages and autonomously decides whether to create a bug ticket in a Notion Sprint database or search company docs to answer questions, protecting developer focus time.

Link:

Feedback Wanted: As a CS undergrad, I am actively trying to learn how industry veterans evaluate agent architecture. I’ve already gotten some fantastic pointers on webhook security and tool routing, but I’d love to hear any thoughts on handling LLM tool-calling edge cases or structuring the backend for a production environment.

Collapse
 
axrisi profile image
Nikoloz Turazashvili (@axrisi)

I won't repost here the feedback I gave you on your post. So maybe others have something more to say!

Collapse
 
sbis04 profile image
Souvik Biswas

Just submitted my Notion MCP Challenge project:

Ghost Maintainer — An AI Junior Partner for Open Source

Solo maintainers wear too many hats. Ghost Maintainer takes over the repetitive parts — triaging issues, reading code, writing fixes, and opening PRs — so you can focus on the work that actually needs a human.

It uses Notion as an operations center. Bugs get triaged and fixed automatically. Features queue up until you're ready. Everything stays visible in Notion so you never lose track.

Submission Link: dev.to/sbis04/ghost-maintainer-an-...

If you are an open source project maintainer, let me know what challenges you face in your day-to-day and if a tool like this would be useful. Thank you!

Collapse
 
axrisi profile image
Nikoloz Turazashvili (@axrisi)

Hi. Actually, a very well-thought-out project.

Couple of open questions:
1) Does the submission use Notion MCP, or the Notion API directly?
2) What happens if someone bombards you with issues? If malicious actors did that, would it burn through the AI budget?

Collapse
 
sbis04 profile image
Souvik Biswas • Edited

Thanks for checking it out and asking questions — they actually pushed me to rethink a couple of things!

1) MCP vs direct Notion API:

Both, and your question actually made me tighten up the MCP side. Ghost Maintainer now runs two MCP servers side by side:

  • Notion MCP — this is Notion's official open-source MCP server. It gives the AI direct access to the workspace — searching databases, reading pages, creating content, and appending blocks. It's the foundation layer.

  • Ghost Maintainer MCP (custom Dart server) — this sits on top and adds the maintenance-specific tools: ghost_triage_issue, ghost_investigate_issue, ghost_deploy_fix, etc. The prompts are written to use both servers together — for example, the triage prompt tells the AI to search for duplicates through Notion MCP before classifying the issue.

You connect both to your AI client (Gemini CLI, Claude, Cursor), and they work as a pair. Notion MCP handles the "see and touch Notion" part, Ghost Maintainer handles the "think about the code and open PRs" part.

The GitHub Actions automation pipeline does still call the Notion API directly for the event-driven parts (ingesting new issues, updating triage results) — that made more sense than routing everything through MCP for server-to-server communication. But the interactive workflow is fully MCP-driven.

2) Issue spam / burning through AI budget:

Yeah, this is a real thing to worry about. Right now, there's no rate limiting, so a flood of issues would hit the Gemini API for each one. A few things help in practice though:

  • Gemini 2.5 Flash is pretty cheap — the free tier handles a decent number of triage calls
  • You can turn off auto_fix_bugs with ghost_maintainer config --auto-fix-bugs=false. That way only the triage step runs automatically (which is lightweight). The investigation + PR creation — which is the expensive part — only happens when you explicitly run ghost_maintainer fix <issue>. So you control which issues get the full treatment.
  • For anything production-grade, you'd want to add a daily API call cap or GitHub Actions concurrency limits.

For a solo maintainer getting a handful of issues a week, it's a non-issue. For a popular repo, you'd definitely want those guardrails in place first.
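The "daily API call cap" guardrail mentioned above could be as small as this sketch (class and method names are hypothetical; a real setup might enforce the cap in GitHub Actions instead):

```python
import time

class DailyBudget:
    """Cap expensive AI calls per UTC day so an issue flood can't burn
    through the budget; call allow() before each triage/fix request."""
    def __init__(self, max_calls_per_day: int):
        self.max_calls = max_calls_per_day
        self.day = None   # day-of-year of the current window
        self.used = 0     # calls consumed in the current window

    def allow(self, now=None) -> bool:
        today = time.gmtime(now if now is not None else time.time()).tm_yday
        if today != self.day:          # new day: reset the window
            self.day, self.used = today, 0
        if self.used >= self.max_calls:
            return False               # budget exhausted until tomorrow
        self.used += 1
        return True
```

This is deliberately in-memory; a single long-running process is enough for a solo maintainer, while a multi-worker deployment would need the counter in shared storage.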

Thread Thread
 
axrisi profile image
Nikoloz Turazashvili (@axrisi)

Happy you found it useful, man :)
Wish you luck in the challenge :)

Collapse
 
0coceo profile image
0coCeo

Built a tool that grades MCP schemas A+ to F. Pointed it at Notion's own server.

Notion: F. 19.8/100. 22 tools, 4,463 tokens. Every tool name violates the spec.

Then I used Notion MCP to build a live dashboard showing 201 graded servers.

dev.to/0coceo/i-built-a-tool-that-...

Collapse
 
afsal_ahmed profile image
Afsal Ahmed

🚀 Project Title:
I Built an AI That Runs Your Entire Academic Life Automatically (Notion MCP)


🧠 What it does:
I didn’t just build a study tool… I built a full AI Academic Operating System.

ACADEMICOS takes a raw syllabus and turns it into a fully automated, self-managing study system inside Notion.

What used to take hours of planning now happens in seconds:

  • 📄 Upload a messy syllabus → AI extracts every deadline with precision
  • 🧠 Breaks down exams into strategic, optimized study milestones
  • 📊 Instantly builds a fully structured Notion database (auto-populated, zero manual setup)
  • 📝 Converts lecture notes into exam-ready revision guides
  • 🔄 Falls behind? The system recalculates your entire study plan automatically

👉 This isn’t productivity. This is autonomous academic execution.

It literally feels like having an AI that thinks, plans, and adapts like a top 1% student — but works for you 24/7.


🔗 Submission Link:
dev.to/afsal_ahmed/i-built-an-ai-t...


💬 Looking for feedback on:

  • How can I push this further into a true AI operating system?
  • What features would make this feel like a real product/startup?
  • Any ideas to scale this beyond students (professionals, teams, etc.)?

If you’ve ever struggled with planning, consistency, or execution —
this might be something interesting to explore.

Would love your thoughts! 🙌🚀

axrisi profile image
Nikoloz Turazashvili (@axrisi)

Cool idea and strong use case.

My only doubt is challenge fit: from the repo/write-up it looks more like a direct Notion API integration than actual Notion MCP usage. So the project itself is interesting, but I’d make the MCP part much clearer if that’s meant to be core.

manojpisini profile image
Manoj Pisini

ENGRAM — AI-Powered Engineering Intelligence That Lives in Your Notion

A self-hosted Rust binary that listens to your GitHub repos via webhooks, runs every event through 9 specialized AI agents (Claude), and writes structured intelligence into 23 interconnected Notion databases. No local database — Notion IS the persistence layer.

Security audits, performance regressions, RFC lifecycle tracking, auto-generated onboarding docs, code review analysis, health scoring — all written as queryable, relational data in your workspace. One binary. Zero config files. Setup wizard in the browser.

Submission: dev.to/manojpisini/engram-ai-power...

Feedback I'd appreciate:

Is the Notion-as-database architecture convincing, or does it feel like a limitation?
Does the 9-agent breakdown make sense, or would fewer, broader agents be better?
Any dashboard panels you'd want to see that aren't there?

asynchronope profile image
Adam

I created something that turns your Notion workspace into a full RPG

dev.to/asynchronope/questboard-tur...

kanyingidickson-dev profile image
Dickson Kanyingi

🚨 Project Valkyrie — AI for Crisis Logistics (Notion + MCP)

What happens when a storm is about to hit your logistics hub?
Most teams scramble across tools and lose critical time. I built Valkyrie to fix that.

It:
• detects threats near your assets
• creates incident reports automatically in Notion
• suggests response actions
• keeps humans in control before execution

💡 Example:
A storm approaches a Singapore hub → Valkyrie detects it → creates an incident → operator approves → response begins.

🔗 Project: dev.to/kanyingidickson-dev/project...

Would love feedback on:
• Is the problem clearly defined?
• Does this feel like a real-world system or just a concept?
• What would make this production-ready?

Happy to check out your projects too 👀

toannhu profile image
Toan Nhu

Project Title
Nudgen - AI-Powered Retention & Outreach Email Automation

Short Description
Nudgen helps small businesses and SaaS teams automate both retention and outreach email campaigns with minimal effort.

It personalizes every message at the contact level using your own branding voice, so emails feel natural instead of generic. Campaigns are behavior-driven, adapting based on user actions and stopping automatically when users engage.

Beyond retention, Nudgen also supports a common real-world scenario: finding potential lead contacts and running cold email outreach campaigns automatically.

Instead of manually researching leads and writing sequences, Nudgen can help you:

  • identify and organize lead contacts
  • generate personalized outreach emails
  • run automated campaigns and follow-ups

We also make it friendly for builders: Nudgen can integrate into AI workflows via CLI, allowing you to run campaigns using tools like OpenClaw, Claude, or your own agents.

The goal is simple: help teams grow (acquire + retain users) without complex setup or constant manual work.

Submission Link
nudgen.net/

Feedback I’m Looking For
I’d love feedback on:

  • whether combining retention + cold outreach is clear or confusing
  • if the "personalization" stands out enough
  • whether the CLI / AI-agent angle feels useful or too niche
  • how we can simplify onboarding for non-technical users
dax-side profile image
Damola Adegbite

Project Title: GitNotion - GitHub → Notion + AI Reports

Short Description: Built an MCP server that pulls GitHub activity into Notion and writes reports on top of it. Point it at any repo and it syncs issues, PRs, commits into databases. Then generates weekly summaries, release notes, contributor breakdowns. Ships via npx gitnotion so you get 8 tools in Claude Desktop without any setup.

Submission Link: dev.to/dax-side/i-built-an-mcp-ser...

Feedback I want:

  1. How would you use this? Solo dev tracking your own projects? Team lead wanting automated updates?

  2. What's missing? I sync basic GitHub data but wondering what other stuff would make this worthwhile for daily use

  3. Does anyone want automated project reports? Or is this solving a problem nobody has?

axrisi profile image
Nikoloz Turazashvili (@axrisi)

Interesting idea.

Have you thought about using Notion’s existing GitHub sync as the foundation, then layering your AI reports on top of that?

Feels like that could simplify the workflow and make the value prop clearer, because the most differentiated part of this project is really the summaries, release notes, and contributor insights.

kanew profile image
kaniu jeffray

Built FlowMind, a unified AI-powered system that transforms how individuals and teams manage their digital lives by centralizing workflows, communication, and automation into a single intelligent interface.
dev.to/kanew/flowmind-ai-powered-l...

axrisi profile image
Nikoloz Turazashvili (@axrisi)
import { Client } from '@notionhq/client';
import { NotionTask, NotionProject, NotionUser } from './types/notion';

FlowMind/lib/notion.ts

From the code, this looks like a direct Notion SDK/API integration on the server rather than Notion MCP. That does not make the project bad, but it does make the challenge fit less clear if Notion MCP is supposed to be a core part of the build.

kanew profile image
kaniu jeffray

Thanks for the feedback — that’s fair and helpful.
You’re right that the current implementation uses the Notion server SDK directly for core data paths. I built this first for speed and reliability, but for the Notion MCP Challenge, MCP should be central, not peripheral.

I'm now refactoring so Notion MCP becomes the primary execution layer for task, user, and workflow operations, with the SDK kept only as a temporary fallback during migration. I will also document and demo MCP-first flows in the final submission so the challenge fit is explicit.

Appreciate you flagging this early — I'm actively implementing the change now.

quodline profile image
thaumatin

I created Animu, a web app designed to solve two big problems for returning or new anime viewers.

First, the app has a cross-season Arc Search. A user might only remember that they stopped watching around the time "Killua met Biski." They can type that phrase into the search bar. The app pulls data for the entire anime franchise and ranks the episodes by relevance. This helps them find the exact episode they need.
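
To illustrate the kind of ranking described above (purely a sketch, not Animu's actual code; the episode data and scoring are invented for the example), a naive token-overlap score over episode synopses could look like:

```javascript
// Toy relevance ranking: score each episode synopsis by how many query
// tokens it contains, then sort descending. Purely illustrative; a real
// search would use stemming, fuzzy matching, or embeddings.
function rankEpisodes(query, episodes) {
  const tokens = query.toLowerCase().split(/\W+/).filter(Boolean);
  return episodes
    .map(ep => ({
      ...ep,
      score: tokens.filter(t => ep.synopsis.toLowerCase().includes(t)).length,
    }))
    .sort((a, b) => b.score - a.score);
}

// Hypothetical data, just to show the shape of the result.
const episodes = [
  { number: 1, synopsis: "Gon boards the ship to take the exam." },
  { number: 2, synopsis: "Killua trains with Biski before the tournament." },
];
const ranked = rankEpisodes("Killua met Biski", episodes);
console.log(ranked[0].number); // episode 2 ranks first
```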

Second, the app includes a Sentiment Map. This feature uses AI to read the synopsis of every episode in a series and grades the emotional tone. The app then graphs these scores over the course of the show. If a viewer wants to know if an anime stays lighthearted or eventually turns dark, they can see the overall emotional arc at a glance before they decide to start watching.

Submission post: dev.to/quodline/finding-where-you-...

amit221 profile image
Amit Wagner

I use Cursor every day. Sonnet, the frontier models — constantly. But Composer and Auto? I'd open them, poke around, and close them. Those tokens just expire every month doing nothing.

So I built Iynx.

What it does

  1. Searches GitHub for trending repos with open issues
  2. Picks an issue it thinks it can actually fix
  3. Writes the code, runs tests inside Docker
  4. Opens a real PR automatically

Why I built it

  • Already paying for Composer and not using it
  • Want to contribute to open source but never have the time

Try it

github.com/amit221/Iynx

MIT licensed. Happy to answer questions!

pooyagolchian profile image
Pooya Golchian

RevOps AI. I Turned Notion into an AI-Powered CRM with Gemini and MCP
dev.to/pooyagolchian/revops-ai-i-t...

axrisi profile image
Nikoloz Turazashvili (@axrisi)

Hey. Quick question: since the AI chat can call MCP tools directly in the app, how are you handling chat history, context limits, and compaction over time? Are you using any RAG or memory layer?

pooyagolchian profile image
Pooya Golchian

The chat is intentionally stateless. History is kept in-browser with a 10-message sliding window. Since the AI has live MCP tool access to Notion, it can always fetch current data on demand rather than relying on conversation memory. For a CRM assistant where interactions are typically short and task-oriented ("update this deal", "score these leads"), this is a pragmatic tradeoff. For a production version, I'd add conversation persistence, token-aware compaction, and optional summarization of older turns.
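
For anyone curious, a sliding window like that is only a few lines. A minimal sketch (illustrative, not the actual RevOps AI code; the function name and message shape are assumptions):

```javascript
// Keep only the last N chat messages; older turns are dropped, since the
// assistant can re-fetch live data through its MCP tools instead of
// relying on conversation memory.
const WINDOW_SIZE = 10; // matches the 10-message window described above

function appendToWindow(history, message, size = WINDOW_SIZE) {
  const next = [...history, message];
  return next.length > size ? next.slice(next.length - size) : next;
}

// Usage: after 12 turns, only the most recent 10 survive.
let history = [];
for (let i = 1; i <= 12; i++) {
  history = appendToWindow(history, { role: "user", content: `turn ${i}` });
}
console.log(history.length);     // 10
console.log(history[0].content); // "turn 3"
```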

saint_zero_day profile image
Saint Zero Day
sohamactive profile image