My System Notes project started as a DEV Challenge and turned into a three-part systems experiment. Like most of my projects, it didn’t stay small.
It started as a simple idea: let the system capture engineering decisions as they happen and make them easy to reference later. Mostly as a future-me record of “what was I thinking?” for any given build.
You can read through my progression of thoughts across these challenge submissions, if you're curious:
- My Portfolio Doesn’t Live on the Page
- From Static Portfolio to Indexed Decisions
- Conversational Retrieval: When Chat Becomes Navigation
The portfolio site does more than just display indexed decisions. It serves as my AI playground for pushing systems behind the scenes, just to see what happens. Over the last few weeks, that playground exposed a very boring problem. The exact kind that quietly slows everything down:
✋ Someone still has to write artifacts into the system: a less-than-thrilling, highly repetitive job I never actually wanted.
The Bottleneck I Accidentally Built ⚙️
The thinking process for the System Notes index already looks like this:
idea
↓
conversation with AI
↓
decision
Most of the reasoning happens in that conversation. ChatGPT helps challenge ideas, organize the thinking, and refine the direction. Turning those decisions into indexed artifacts required an extra step, and it got worse after I migrated from JSON to Supabase.
Originally, I handled it all manually, but that got tiresome quickly. So I let AI identify and summarize the decisions made at the end of a session. From there I'd copy, paste, edit, and insert the record.
Later, I gave ChatGPT strict artifact instructions to format the output as a SQL insert. That removed one step and technically worked. In practice, not so much.
It was far from a perfect system, and it was often buggy. Even worse, it still required me to context-switch, copy the output, paste it into a query, and fix whatever the AI inevitably messed up along the way.
So, before any further tinkering, the second half of my workflow looked like this:
decision
↓
AI generates SQL
↓
copy
↓
paste
↓
I fix SQL
↓
insert
Which is not exactly the frictionless system I had in mind…
Supascribe: Letting AI Write Data Artifacts 🏗️
Since AI was already doing most of the heavy lifting, I saw no reason not to remove several of those steps with a little upfront structure. So, I wrote Supascribe—a small devtool designed to remove the manual translation layer eating into my build time.
Supascribe does one unconventional thing: it lets the AI collaborator write directly to the database, with a human-in-the-loop review step.
Risky? Probably. Uncontrolled? No.
The pipeline now looks like this:
AI collaboration
↓
artifact proposal
↓
human review
↓
schema check
↓
database insert
ChatGPT drafts the artifact from the conversation history, and after I approve it, the tool writes it to Supabase.
The goal is simple: shorten the distance between thinking about a decision and capturing it in the system.
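The pipeline above can be sketched in a few lines of TypeScript. Everything here is illustrative: the function names, the artifact shape, and the in-memory "table" are my stand-ins, not Supascribe's real API.

```typescript
// Minimal sketch of the Supascribe pipeline: AI proposal → human review →
// schema check → insert. All names and shapes are illustrative assumptions.

interface Artifact {
  title: string;
  summary: string;
  tags: string[];
}

// Stand-in for the AI-drafted proposal (ChatGPT supplies this in practice).
function draftProposal(): Artifact {
  return {
    title: "Migrate the index from JSON to Supabase",
    summary: "Artifacts now live in Postgres instead of a flat file.",
    tags: ["supabase", "migration"],
  };
}

// Human-in-the-loop gate; in the real tool this is an Approve/Deny button.
function humanApproves(_proposal: Artifact): boolean {
  return true; // stubbed as auto-approve for this sketch
}

// Structural check before anything touches the database.
function passesSchemaCheck(a: Artifact): boolean {
  return a.title.length > 0 && a.summary.length > 0 && a.tags.length > 0;
}

// Stand-in for the Supabase table.
const artifactsTable: Artifact[] = [];

function capture(): void {
  const proposal = draftProposal();
  if (!humanApproves(proposal)) return; // denied proposals never reach the DB
  if (!passesSchemaCheck(proposal)) throw new Error("schema check failed");
  artifactsTable.push(proposal); // real tool: an insert into Supabase
}
```

The important design choice is ordering: the human gate sits before the schema check and the insert, so a denied proposal never gets anywhere near the database.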
Right now the tool is intentionally minimal. It does exactly three things:
- Accept structured artifact input from ChatGPT
- Check all required fields with a strict Zod schema
- Write the artifact into the database
That’s it—there's no magic yet. Just structured input, a schema check, and a controlled insert. As it turns out, that was enough to remove the SQL-copying circus from my workflow.
Where The System Still Slows Me Down 🚧
The biggest problem is that this isn't exactly the foolproof solution I first envisioned: it still relies heavily on my Approve/Deny button to maintain data integrity. AI is allowed to propose artifacts and insert them, but only after I approve them, which isn't what I wanted but was absolutely necessary for version one.
The integrity of the index is protected, but the system doesn't eliminate the human bottleneck yet. Right now Supascribe shortens the path between conversation and artifact, but it doesn’t fully automate it.
This system accelerates thinking; it doesn't delegate decision authority.
And that’s intentional. Letting AI write at-will into your data layer without strict guardrails is a great way to accidentally invent a brand new genre of data corruption. 😕
Teaching AI To Touch Data Safely 🦾
The next phase of this experiment is testing how much autonomy the AI collaborator can safely gain.
That likely means stronger guardrails in two immediate places:
- The backend can enforce stricter validation around artifact structure and write behavior.
- The AI can perform structured validation before proposing artifacts.
The next goal is to make the workflow resilient enough for AI to safely participate in knowledge capture, not just idea generation. Right now the system is cautious by design, but I do want to gradually increase its autonomy and see how well data integrity holds over time.
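The first of those guardrails, backend-side enforcement, might look something like this. The table name and limits are assumptions for illustration, not Supascribe's real configuration:

```typescript
// Sketch of a backend-side write guardrail that runs regardless of what the
// AI claims it already validated. Table allowlist and limits are assumptions.

const WRITABLE_TABLES = new Set(["artifacts"]);
const MAX_FIELD_LENGTH = 2000;

interface WriteRequest {
  table: string;
  row: Record<string, string>;
}

// Reject writes aimed at unexpected tables or carrying oversized fields
// before the insert ever reaches the database.
function enforceWritePolicy(req: WriteRequest): boolean {
  if (!WRITABLE_TABLES.has(req.table)) return false;
  return Object.values(req.row).every((value) => value.length <= MAX_FIELD_LENGTH);
}
```

The point of a policy like this is that it holds even if the AI-side validation is skipped or wrong: the backend never trusts the proposer.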
What started as documentation automation is turning into something bigger: testing how much responsibility an AI collaborator can safely hold.
The Real Question Behind This 🌀
My System Notes project started as a simple portfolio experiment. Supascribe turned it into a systems experiment.
Now I'm testing how well AI acts as a participant in the artifact creation layer of a knowledge system.
Not just generating text or ideas, but using its own memory and strict guidelines to identify which decisions should become part of the underlying system.
Admittedly, that’s a much more dangerous layer for AI to operate in. And it sounds like fun to me.
Most AI tooling stays safely away from the data layer of any system. It's allowed to draft, suggest, summarize, and code. However, Supascribe goes one step further and asks:
What happens if the AI helps write the system itself?
Yes—I’m aware this could explode in very entertaining ways. That’s kind of the point. 🌀
I started this experiment trying to remove friction from documentation.
What I’m actually testing is whether AI can safely participate in the systems that decide what gets remembered and what gets trusted.
The System Didn’t Write This Alone 🛡️
This post was written by me, with ChatGPT acting as a thinking partner while refining structure and clarity. The decisions, experiments, and system design are mine. ChatGPT helped challenge wording and tighten the narrative.
AI wants you to know that it performed no database writes during the editing of this post. That seemed wise.