<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Souren Ghosh</title>
    <description>The latest articles on DEV Community by Souren Ghosh (@soureng).</description>
    <link>https://dev.to/soureng</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3930988%2F668369b7-2c5e-4ef8-8b6d-f6cd8caeab5f.png</url>
      <title>DEV Community: Souren Ghosh</title>
      <link>https://dev.to/soureng</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/soureng"/>
    <language>en</language>
    <item>
      <title>I Built a Full-Stack AI Second Brain App Without Writing a Single Line of Backend Code</title>
      <dc:creator>Souren Ghosh</dc:creator>
      <pubDate>Thu, 14 May 2026 10:56:54 +0000</pubDate>
      <link>https://dev.to/soureng/i-built-a-full-stack-ai-second-brain-app-without-writing-a-single-line-of-backend-code-14oe</link>
      <guid>https://dev.to/soureng/i-built-a-full-stack-ai-second-brain-app-without-writing-a-single-line-of-backend-code-14oe</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;PDF OCR, semantic mind maps, RAG search, and a hierarchical tag system — all generated through conversations with an AI app builder.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;p&gt;I want to tell you about the most productive few days of building I have ever had.&lt;/p&gt;

&lt;p&gt;Not because I wrote exceptional code. But because I barely wrote any backend code at all.&lt;/p&gt;

&lt;p&gt;I built &lt;strong&gt;Neuron&lt;/strong&gt; — a full-stack personal knowledge management app with AI summarisation, PDF text extraction via OCR, three-mode interactive mind maps, a plain English query interface, and a Studio document editor — using &lt;a href="https://medo.dev" rel="noopener noreferrer"&gt;MeDo&lt;/a&gt;, a no-code AI app builder, as my primary development tool.&lt;/p&gt;

&lt;p&gt;This is the honest story of how that went: what worked brilliantly, what broke repeatedly, and what I learned about describing software to an AI well enough that it builds the right thing.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Problem I Was Solving
&lt;/h2&gt;

&lt;p&gt;Most of us consume hundreds of articles, videos, and ideas every week and forget almost all of them.&lt;/p&gt;

&lt;p&gt;Traditional note apps — Notion, Obsidian, Evernote — store information well. But they require manual effort. You paste a link, you write the summary, you add the tags, you find the connections. The app is a filing cabinet. You are the librarian.&lt;/p&gt;

&lt;p&gt;I wanted to build something where the AI does the librarian work. Capture anything, and the system automatically:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Summarises it in three sentences&lt;/li&gt;
&lt;li&gt;Tags it and places it in a topic hierarchy&lt;/li&gt;
&lt;li&gt;Finds semantic connections to everything else you have saved&lt;/li&gt;
&lt;li&gt;Lets you query your entire knowledge base in plain English&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The result is Neuron. Here is what it ended up containing.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Got Built
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Five screens, fully functional:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Capture&lt;/strong&gt; — paste any URL, raw text, or upload a PDF. AI processes everything automatically.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Library&lt;/strong&gt; — flat grid view or date-based folder tree, with a hierarchical tag sidebar for filtering.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Mind Map&lt;/strong&gt; — three zoomable drill-down modes (Time, Topic, Network) with level-of-detail unfolding.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ask&lt;/strong&gt; — plain English RAG search across your entire knowledge base with cited answers.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Studio&lt;/strong&gt; — rich text document editor with inline drawing canvas, autosave, and publish flow.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Three external API integrations:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Jina Reader for clean URL content extraction&lt;/li&gt;
&lt;li&gt;Baidu AI Studio PaddleOCR for PDF text extraction via a server-side proxy&lt;/li&gt;
&lt;li&gt;Supabase for the complete backend (database, auth, edge functions)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;A tag hierarchy system&lt;/strong&gt; where tags form a parent-child tree four levels deep, with eight permanent root categories (technology, science, business, creativity, health, philosophy, history, other) that cannot be deleted.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;User authentication&lt;/strong&gt; with anonymous 24-hour sessions and full account registration, with complete data isolation between users and sessions.&lt;/p&gt;




&lt;h2&gt;
  
  
  How I Actually Built It: Describing Software to an AI
&lt;/h2&gt;

&lt;p&gt;The first and most important thing I learned is that MeDo is not a command executor. It is a generative app builder. The difference matters enormously.&lt;/p&gt;

&lt;p&gt;If you tell it "create a database with these fields," it responds politely saying it cannot execute backend commands. If you tell it "build an app where users paste URLs and the app scrapes the content, summarises it with AI, and saves it to a library," it builds the entire thing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The mental model shift:&lt;/strong&gt; describe what users experience, not what the system does internally.&lt;/p&gt;

&lt;p&gt;Instead of: &lt;em&gt;"Create a notes table with fields: id, title, raw_content, summary, source_type, tags, topic_category, owner_id, created_at"&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Say: &lt;em&gt;"Build an app where users paste any URL. The app fetches the full page content, uses AI to generate a 3-sentence summary, assigns topic tags, and saves everything to a library the user can browse."&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;MeDo handles the schema, the database, the API routes, and the frontend — all from the second description.&lt;/p&gt;

&lt;h3&gt;
  
  
  My Prompting Structure
&lt;/h3&gt;

&lt;p&gt;I built Neuron in five distinct conversation phases:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Phase 1 — Master prompt.&lt;/strong&gt; One large prompt describing the entire app: all five screens, what users do on each, what the AI does, what data gets stored, and all four source types. This gave MeDo the full picture before generating anything, producing a coherent skeleton rather than disconnected fragments.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Phase 2 — Feature depth.&lt;/strong&gt; One feature per conversation turn. I described UI in exhaustive detail — exact pixel sizes, colours, hover states, animations, error cases, empty states. MeDo generates better code when it can visualise the precise output.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Phase 3 — API integrations.&lt;/strong&gt; Each external service introduced as a user flow with the exact API endpoint and response structure included in the prompt. For the OCR proxy I described the CORS problem explicitly: "the browser cannot call this URL directly, so we need the backend to make the call instead." MeDo generated the proxy route, the frontend fetch, and all error handling from that one sentence.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Phase 4 — Debugging.&lt;/strong&gt; When something broke silently, I asked MeDo to add visible debug logging panels in the UI rather than guessing at fixes. Reading actual error messages produced precise fix prompts rather than regenerating large sections.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Phase 5 — Polish.&lt;/strong&gt; Visual redesign described in CSS tokens — exact colour values, spacing, shadow definitions, transition timings. This let MeDo apply comprehensive visual changes without touching logic.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Feature That Impressed Me Most: The Mind Map
&lt;/h2&gt;

&lt;p&gt;I was most sceptical about the mind map. This is genuinely complex visualisation work — SVG rendering, zoom and pan mechanics, force-directed layouts, level-of-detail algorithms. Normally this is the kind of thing that requires a specialist engineer and several weeks.&lt;/p&gt;

&lt;p&gt;I described three completely different visualisation modes in a single prompt:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Time Drill&lt;/strong&gt; — knowledge unfolds as a timeline. Start at the year level, zoom in to months, then days, then hours, then individual notes. Each zoom level reveals children around their parent. The parent stays visible at 30% opacity as a ghost anchor — so you never lose context of where you are in the hierarchy.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Topic Drill&lt;/strong&gt; — knowledge organised by subject. Topic category hexagons at the top level, clicking one reveals its root tags as pill nodes, drilling further reveals subtags, then individual notes. The tag hierarchy from the database directly powers the drill-down structure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Network mode&lt;/strong&gt; — an orbital layout showing what is at the centre of your knowledge. The most-connected note becomes the hub. Its direct connections orbit it in the first ring at 220px radius. Second-degree connections form an outer ring at 420px. Isolated notes sit at the periphery. Faint concentric guide circles (like an orrery) make the ring structure immediately legible.&lt;/p&gt;

&lt;p&gt;MeDo generated working SVG-based visualisations from these descriptions. The ghost parent mechanic — where drilling into a year node leaves a dim outline of the year behind as spatial context — came from one paragraph describing the desired user experience. The bezier connection lines that animate in using SVG stroke-dashoffset came from a single sentence specifying the animation.&lt;/p&gt;
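&lt;p&gt;For illustration, the Network mode geometry above is just polar placement. Here is a minimal sketch: the 220px and 420px ring radii come from the description, but the function and node shape are mine, not MeDo's generated output:&lt;/p&gt;

```javascript
// Illustrative sketch of the Network mode orbital layout: hub at the
// centre, first-degree connections on a 220px ring, second-degree on 420px.
function orbitalLayout(hub, firstRing, secondRing) {
  const place = (nodes, radius) =>
    nodes.map((node, i) => {
      const angle = (2 * Math.PI * i) / nodes.length;
      return {
        ...node,
        x: Math.round(radius * Math.cos(angle)),
        y: Math.round(radius * Math.sin(angle)),
      };
    });
  return {
    hub: { ...hub, x: 0, y: 0 },
    ring1: place(firstRing, 220),
    ring2: place(secondRing, 420),
  };
}
```

&lt;p&gt;The faint concentric guide circles are then just two SVG circles at those same radii, drawn behind the nodes.&lt;/p&gt;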




&lt;h2&gt;
  
  
  The Integration That Required the Most Problem-Solving: PDF OCR
&lt;/h2&gt;

&lt;p&gt;The PDF OCR pipeline was the most technically interesting challenge.&lt;/p&gt;

&lt;p&gt;I wanted to use PaddleOCR from Baidu AI Studio — it is excellent for document text extraction. The API is straightforward: send a base64-encoded PDF, receive an array of recognised text strings.&lt;/p&gt;

&lt;p&gt;The problem: &lt;strong&gt;CORS&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Browsers block direct cross-origin requests to external APIs. The fetch would fire and immediately fail with &lt;code&gt;TypeError: Failed to fetch&lt;/code&gt; — the browser refusing to complete the request before it even reached the server.&lt;/p&gt;

&lt;p&gt;The solution is a server-side proxy. The browser calls your own backend, your backend calls the external API server-to-server where CORS does not apply, and your backend forwards the response back. Simple in principle, but it previously required writing actual server code.&lt;/p&gt;

&lt;p&gt;I described the problem to MeDo in plain language:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;"The OCR API call is being blocked by CORS — the browser cannot call this endpoint directly. Fix this by creating a server-side proxy route &lt;code&gt;/api/proxy/ocr&lt;/code&gt; that receives the request from the frontend, forwards it to the Baidu endpoint server-to-server, and returns the response."&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;MeDo generated the proxy route, updated the frontend fetch to target it, and handled upstream failure cases with a 502 error response. The fix worked on the first attempt.&lt;/p&gt;
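&lt;p&gt;For the curious, the shape of that proxy can be approximated like this. This is a hand-written sketch, not MeDo's actual output; the upstream URL and the handler signature are placeholders:&lt;/p&gt;

```javascript
// Sketch of the /api/proxy/ocr idea: receive the frontend's request body,
// forward it to the upstream OCR endpoint server-to-server (where CORS
// does not apply), and relay the result. Upstream failures become a 502.
async function proxyOcr(requestBody, fetchImpl = fetch) {
  try {
    const upstream = await fetchImpl("https://example-ocr-endpoint/api", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(requestBody),
    });
    if (!upstream.ok) {
      return { status: 502, body: { error: "Upstream OCR call failed" } };
    }
    return { status: 200, body: await upstream.json() };
  } catch (err) {
    return { status: 502, body: { error: String(err) } };
  }
}
```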

&lt;p&gt;The response parsing was also non-trivial. The PaddleOCR response nests extracted text inside:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ocrResults&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;i&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nx"&gt;prunedResult&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;rec_texts&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A multi-level nested array structure. I described the exact path to MeDo and it generated correct parsing code that joins all &lt;code&gt;rec_texts&lt;/code&gt; arrays across all pages into a single clean text string for AI summarisation.&lt;/p&gt;
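&lt;p&gt;That parsing step looks roughly like this (the function name is my own; the response path is the one quoted above):&lt;/p&gt;

```javascript
// Flattens the nested PaddleOCR response: joins every rec_texts array
// across all pages into one clean string for summarisation.
function extractOcrText(response) {
  const pages = response.result?.ocrResults || [];
  return pages
    .map((page) => (page.prunedResult?.rec_texts || []).join(" "))
    .join("\n")
    .trim();
}
```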




&lt;h2&gt;
  
  
  What Broke (Honest Account)
&lt;/h2&gt;

&lt;p&gt;I am not going to pretend this was frictionless. Here is what broke and why:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tag hierarchy duplicates.&lt;/strong&gt; The tag system creates parent-child tag relationships automatically. The &lt;code&gt;findOrCreateTag&lt;/code&gt; function was matching tags on name only, not on &lt;code&gt;(name, parent_id)&lt;/code&gt; together. This created duplicate &lt;code&gt;history&lt;/code&gt; tags — one root-level, one nested under &lt;code&gt;other&lt;/code&gt;. Fix: always match on both name AND parent_id simultaneously.&lt;/p&gt;
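&lt;p&gt;The fix, sketched against an in-memory list rather than the real database query — the matching rule is the point:&lt;/p&gt;

```javascript
// Match on BOTH name and parent_id; matching on name alone is what
// created the duplicate tags. `existingTags` stands in for the tags table.
function findOrCreateTag(existingTags, name, parentId) {
  const found = existingTags
    .filter((t) => t.name === name)
    .find((t) => t.parent_id === parentId);
  if (found) return found;
  const created = { id: existingTags.length + 1, name, parent_id: parentId };
  existingTags.push(created);
  return created;
}
```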

&lt;p&gt;&lt;strong&gt;The save button going silent.&lt;/strong&gt; After adding the tag hierarchy system, the Save button stopped doing anything. No errors, no network calls, just silence. Root cause: a Promise chain was returning early when tag resolution threw an exception, and the error was being swallowed. Fix: wrap tag resolution in try/catch with &lt;code&gt;tags = []&lt;/code&gt; fallback — tags failing must never prevent a note from saving.&lt;/p&gt;
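&lt;p&gt;The pattern of that fix, where &lt;code&gt;resolveTags&lt;/code&gt; and &lt;code&gt;insertNote&lt;/code&gt; are hypothetical stand-ins for the real functions:&lt;/p&gt;

```javascript
// A tag-resolution failure must never block saving the note itself.
async function saveNote(note, resolveTags, insertNote) {
  let tags = [];
  try {
    tags = await resolveTags(note);
  } catch (err) {
    tags = []; // deliberate: save the note untagged rather than fail silently
  }
  return insertNote({ ...note, tags });
}
```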

&lt;p&gt;&lt;strong&gt;Anonymous session UUIDs.&lt;/strong&gt; The database &lt;code&gt;owner_id&lt;/code&gt; column was typed as UUID. Anonymous sessions were generating IDs like &lt;code&gt;anon-17780529566&lt;/code&gt; — valid strings, invalid UUIDs. Every anonymous save failed silently. Fix: generate proper UUID v4 strings for anonymous sessions using the standard hex replacement algorithm.&lt;/p&gt;
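&lt;p&gt;For reference, the template-replacement generator in question (modern runtimes also offer &lt;code&gt;crypto.randomUUID()&lt;/code&gt;, which does the same job):&lt;/p&gt;

```javascript
// UUID v4 via the classic template-replacement algorithm: a random hex
// digit for each x, and a variant digit (8, 9, a, or b) for y.
function uuidv4() {
  return "xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx".replace(/[xy]/g, (c) => {
    const r = Math.floor(Math.random() * 16);
    const v = c === "x" ? r : (r % 4) + 8;
    return v.toString(16);
  });
}
```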

&lt;p&gt;&lt;strong&gt;OCR token management.&lt;/strong&gt; The initial design had separate API Key and Secret Key fields for an OAuth flow. The actual Baidu AI Studio app uses a single bearer token with no OAuth. Once I had the real API documentation, I prompted MeDo to simplify the key management to a single token field.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CORS on OCR (described above).&lt;/strong&gt; Not a MeDo failure — a fundamental browser security constraint. Solved with the proxy architecture.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Tag Hierarchy System
&lt;/h2&gt;

&lt;p&gt;This deserves its own section because it was the most architecturally interesting piece.&lt;/p&gt;

&lt;p&gt;Tags in Neuron are not a flat list. They form a tree:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;technology (root — permanent)
  └── artificial-intelligence (domain)
        └── neural-networks (specific)
              └── backpropagation (detail)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Eight root categories are permanent and cannot be deleted. Every other tag must have a parent. Maximum depth of four levels.&lt;/p&gt;

&lt;p&gt;When AI generates tags for a new note, it returns a hierarchy array:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"tag"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"technology"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"parent"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"tag"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"artificial-intelligence"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"parent"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"technology"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"tag"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"neural-networks"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"parent"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"artificial-intelligence"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"tag"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"backpropagation"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"parent"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"neural-networks"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Only the leaf tags (those not appearing as a parent of any other tag in the array) are saved to the note. Parent tags are implied through ancestry.&lt;/p&gt;
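&lt;p&gt;Extracting those leaves from the hierarchy array is a single set-difference (the function name is mine):&lt;/p&gt;

```javascript
// A leaf is any entry whose tag never appears as another entry's parent.
function leafTags(hierarchy) {
  const parents = new Set(hierarchy.map((e) => e.parent));
  return hierarchy.filter((e) => !parents.has(e.tag)).map((e) => e.tag);
}
```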

&lt;p&gt;This structure powers the Topic Drill mind map — each drill level corresponds to one depth level of the tag tree. It also powers the Library tag filter panel, where selecting a parent tag automatically includes all descendant notes.&lt;/p&gt;
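&lt;p&gt;The descendant-inclusive filter can be sketched as a fixed-point walk over the flat tag rows; the row shape here is illustrative, and a recursive query on the database side would achieve the same thing:&lt;/p&gt;

```javascript
// Collect a tag id plus all of its descendants from flat
// { id, parent_id } rows, repeating until no new ids are added.
function tagWithDescendants(tags, rootId) {
  const ids = new Set([rootId]);
  let grew = true;
  while (grew) {
    grew = false;
    for (const t of tags) {
      if (ids.has(t.parent_id)) {
        if (!ids.has(t.id)) {
          ids.add(t.id);
          grew = true;
        }
      }
    }
  }
  return ids;
}
```

&lt;p&gt;Filtering the library is then just: show every note tagged with any id in the returned set.&lt;/p&gt;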

&lt;p&gt;Getting MeDo to implement this correctly required the most precise prompting in the entire project. The key insight: describe the data structure as a user experience, not a schema. "Tags form a hierarchy where clicking a parent tag in the filter shows all notes with that tag or any of its child tags" produced correct implementation. "Create a self-referential foreign key on the tags table" produced a schema confirmation with no UI.&lt;/p&gt;




&lt;h2&gt;
  
  
  What I Would Do Differently
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Start with the data model described as user flows.&lt;/strong&gt; The earliest version of Neuron had a flat tag array on notes. Retrofitting a hierarchy system onto an existing schema caused most of the bugs. Starting with "tags can have child tags, and a note can be tagged at any level of the hierarchy" in the initial master prompt would have generated the right structure from day one.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;One edge case per prompt.&lt;/strong&gt; When I described multiple fix scenarios in one message, MeDo sometimes fixed one and introduced a regression in another. The most reliable workflow: one specific problem, one focused description, one test before the next prompt.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Describe empty states and error states upfront.&lt;/strong&gt; The initial prompts focused on the happy path. Adding empty states, loading states, and error states as afterthoughts required going back to every screen. Describing them in the initial prompt would have been more efficient.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Debug logging panels are incredibly useful.&lt;/strong&gt; Adding a visible in-app debug log to the Capture screen — showing each step of the processing pipeline with timestamps — was the single best debugging decision I made. Being able to see "OCR returned 4 pages, 183 text blocks, first 120 chars: Nathan Lerner..." in the UI while testing was worth more than any amount of console.log hunting in DevTools.&lt;/p&gt;
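&lt;p&gt;That debug panel boils down to something very small (names are illustrative): each pipeline step appends an entry, and the UI simply renders the list on every update.&lt;/p&gt;

```javascript
// Minimal in-app debug log: timestamped entries a UI panel can render.
function createDebugLog() {
  const entries = [];
  return {
    log(step, detail) {
      entries.push({ at: new Date().toISOString(), step, detail });
    },
    entries: () => entries.slice(),
  };
}
```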




&lt;h2&gt;
  
  
  The Honest Answer to "Can No-Code Build Real Apps?"
&lt;/h2&gt;

&lt;p&gt;Yes. With caveats.&lt;/p&gt;

&lt;p&gt;The limiting factor was never MeDo's capability. Every feature I described carefully enough was generated correctly. The features that broke were the ones I described vaguely, or where I described multiple things at once, or where I used technical terminology (schema, endpoint, foreign key) instead of user-experience language.&lt;/p&gt;

&lt;p&gt;The skill required is not programming. It is &lt;strong&gt;product thinking expressed precisely in natural language&lt;/strong&gt;. Knowing what you want the user to experience. Knowing the edge cases. Knowing what happens when things go wrong. Being able to describe a user interaction from first click to final state without ambiguity.&lt;/p&gt;

&lt;p&gt;That skill is arguably harder than writing the code itself, because it requires understanding both the product and the engineering constraints well enough to describe them accurately to someone — or something — that will implement exactly what you say.&lt;/p&gt;

&lt;p&gt;Neuron is a genuinely useful application that I am continuing to develop. It was built in days. It has a full backend, real authentication, external API integrations, a complex data model, and three different interactive visualisation modes.&lt;/p&gt;

&lt;p&gt;That is what MeDo makes possible when you learn to describe software well.&lt;/p&gt;




&lt;h2&gt;
  
  
  Try It / Links
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Neuron app:&lt;/strong&gt; &lt;a href="https://app-aw8hrevmfcox.appmedo.com/" rel="noopener noreferrer"&gt;https://app-aw8hrevmfcox.appmedo.com/&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;MeDo platform:&lt;/strong&gt; &lt;a href="https://medo.dev" rel="noopener noreferrer"&gt;https://medo.dev&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Demo video:&lt;/strong&gt; &lt;a href="https://www.youtube.com/watch?v=PtNkFK_Q7W4" rel="noopener noreferrer"&gt;https://www.youtube.com/watch?v=PtNkFK_Q7W4&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Hackathon submission:&lt;/strong&gt; &lt;a href="https://devpost.com/software/mindweave-f2wiv4" rel="noopener noreferrer"&gt;https://devpost.com/software/mindweave-f2wiv4&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;em&gt;Built for the MeDo Hackathon 2026&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Tags:&lt;/strong&gt; &lt;code&gt;BuiltWithMeDo&lt;/code&gt; &lt;code&gt;nocode&lt;/code&gt; &lt;code&gt;ai&lt;/code&gt; &lt;code&gt;buildinpublic&lt;/code&gt; &lt;code&gt;productivity&lt;/code&gt; &lt;code&gt;javascript&lt;/code&gt; &lt;code&gt;react&lt;/code&gt; &lt;code&gt;supabase&lt;/code&gt; &lt;code&gt;hackathon&lt;/code&gt; &lt;code&gt;webdev&lt;/code&gt; &lt;code&gt;showdev&lt;/code&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>builtwithmedo</category>
      <category>webdev</category>
    </item>
  </channel>
</rss>
