
Peter Salvato

Posted on • Originally published at petersalvato.com

The Site Is the Proof

In March 2026, I asked Gemini to review petersalvato.com. I provided no prior knowledge, no context about who I am, and no explanation of how the site was built. I wanted a blind evaluation of the voice, the structure, and the "humanity" of the work. The evaluator spent time with the pages and then delivered its verdict. It praised the "anti-slop" quality of the writing. It noted the idiosyncratic taxonomy as a sign of a specific mental model. It identified what it called "pragmatic cynicism" and "contextual asymmetry" as clear markers of a human author who had actually lived through the projects described. The conclusion was definitive: the site was "unequivocally" human-derived. The only way AI could have been involved, the evaluator noted, was if someone had used an LLM to tighten up existing, very strong human drafts.

Then I told it the truth. Every page on my site was compiled by the system described on the site itself. The content was never hand-drafted and then polished with AI. The methodology I've spent three years building produced every sentence, every structural decision, every project description. The evaluator's response shifted: "You haven't just built a website: you've built a self-documenting compiler for identity."


The source material for this compiler isn't a set of drafts. It's a raw corpus of my own thinking. Between January 2023 and early 2026, I accumulated thousands of conversations across ChatGPT, Claude Code, Gemini, and Claude.ai. Three years of thinking out loud: arguing with tools, working through complex architectural problems, explaining things to myself, and failing at things until they finally worked. This material is raw, unpolished, and entirely conversational. It was never meant to be read directly. It was meant to be compiled.

I use AI as a refinery, not a generator. My conversations are the data. The system I built mines that data, evaluates it against a set of rigorous standards, and compiles it into the result you see here.


That corpus exists because the system was designed to accept raw thinking. In FormWork, the first accommodation is aimed at the human: get the idea out of your head with as little friction as possible. Talk, dictate, answer questions. No requirement to organize or perform. The material stays conversational, and that's the point. It carries my actual voice, my actual sentence rhythms, the imagery I reach for when I'm thinking, not writing. The tools that follow can only preserve what the source material already contains. If the input had been structured, polished, performed, the output would sound like everyone else's. The rawness is what carries the voice through.

The pipeline that operates on this source material is four tools, each one built to solve a specific gap between human thought and machine output.

First, a knowledge skill traverses the full corpus. It doesn't matter if the data is a JSON export from ChatGPT or a Markdown log from Claude Code; the skill identifies "real moments." It looks for what actually happened, what I actually said in the heat of a project, and what decisions were actually made. This is the foundation of the site's fidelity. The system is specifically prevented from inventing anything. Every claim traces to a verified source in the corpus, or it gets cut. (I Needed a Better Tool)
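The real skill's extraction criteria aren't published, so here's a minimal sketch of the shape of that step: normalize each corpus file into passages with source references, then keep only passages that look like things that actually happened. The file formats, the `messages` key, and the keyword heuristic are all assumptions; the property worth noticing is that every surviving `Moment` keeps a reference back to where it came from, which is what lets every claim trace to a verified source.

```python
import json
import re
from dataclasses import dataclass

@dataclass
class Moment:
    """A candidate 'real moment' plus a reference back to its source."""
    text: str
    source: str  # file + position, so every claim stays traceable

def normalize(raw: str, name: str) -> list[tuple[str, str]]:
    """Turn one corpus file into (text, source_ref) pairs. The two shapes
    handled here -- a ChatGPT-style JSON export with a 'messages' list and
    a paragraph-per-block Markdown log -- are assumptions about the format."""
    if name.endswith(".json"):
        data = json.loads(raw)
        return [(m.get("content", ""), f"{name}#msg{i}")
                for i, m in enumerate(data.get("messages", []))]
    return [(block, f"{name}#block{i}")
            for i, block in enumerate(raw.split("\n\n"))]

def extract_moments(pairs, min_words=8):
    """Keep passages that read like events or decisions, not filler.
    The keyword heuristic is a stand-in for whatever the real skill does."""
    signal = re.compile(r"\b(decided|failed|realized|switched|because)\b", re.I)
    return [Moment(t.strip(), ref) for t, ref in pairs
            if len(t.split()) >= min_words and signal.search(t)]
```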

Second, a voice protocol makes sure the output matches how I actually talk. Most people write for publication by performing a version of themselves. They use "furthermore" and "moreover"; they "delve" into "vibrant tapestries." In my conversations, I don't talk like that. I'm matter-of-fact, occasionally cynical, and focused on specifics. The voice protocol uses a 12-item checklist to catch AI writing patterns, marketing language, and performed formality. It extracts the voice from my unguarded sessions and applies it to the compiled output. (Voice Governance)
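The protocol's actual twelve items aren't reproduced here, so as a sketch, a checklist like this can run as a set of named pattern checks over a draft. The four entries below are hypothetical examples in the spirit of the tells the post names ("furthermore," "delve," "vibrant tapestries"), not the real list.

```python
import re

# Hypothetical entries standing in for the 12-item checklist;
# the real protocol's patterns are not published.
VOICE_CHECKLIST = {
    "performed_formality": r"\b(furthermore|moreover|thusly)\b",
    "ai_verbs": r"\b(delve|unpack|navigate the landscape)\b",
    "marketing_language": r"\b(vibrant tapestr(y|ies)|game.?changer|seamless(ly)?)\b",
    "hedging_filler": r"\bit is worth noting that\b",
}

def check_voice(text: str) -> list[str]:
    """Return the names of every checklist item the draft violates."""
    return [name for name, pattern in VOICE_CHECKLIST.items()
            if re.search(pattern, text, re.IGNORECASE)]
```

A draft that trips any check goes back for rework; a clean draft returns an empty list.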

Third, the work is evaluated through multiple lenses extracted from real practitioners. These aren't "act as a designer" caricatures. They are codified evaluative frameworks built by studying the actual output and decision-making patterns of experts. We extract the questions these practitioners consistently ask of a piece of work and validate those extractions against work they actually produced. By running multiple lenses against a single dimension of the work, we surface tensions. Where the lenses agree, we have a strong signal. Where they disagree, I make the choice. (Lens Extraction)
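The agree/disagree mechanic can be sketched as running every lens over the same piece of work and partitioning the verdicts: unanimity in either direction is signal, a split is a tension escalated to the human. The two lens functions below are invented illustrations, not extractions from real practitioners.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Verdict:
    lens: str
    passes: bool
    note: str

def structural_lens(work: str) -> Verdict:
    # Hypothetical question: does the piece state a concrete outcome?
    ok = "shipped" in work or "measured" in work
    return Verdict("structural", ok, "looks for concrete outcomes")

def narrative_lens(work: str) -> Verdict:
    # Hypothetical question: is the author's stake visible?
    ok = " I " in f" {work} "
    return Verdict("narrative", ok, "looks for first-person stake")

def run_lenses(work: str, lenses: list[Callable[[str], Verdict]]):
    """Run every lens over one dimension of the work. Unanimous verdicts
    are strong signal; a split is a tension the human resolves."""
    verdicts = [fn(work) for fn in lenses]
    unanimous = all(v.passes for v in verdicts) or not any(v.passes for v in verdicts)
    return verdicts, ("signal" if unanimous else "tension")
```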

Finally, a coordinator dispatches the entire process in parallel. Structural lenses, narrative lenses, voice checks, and baseline verifications all run at once. This is where the governance happens. If the structural layer says the engineering is sound but the narrative layer says the identity of the project is buried, the system doesn't smooth it over. It surfaces the conflict. The accumulated decisions I make to resolve those tensions are the work. (The Integrated System)
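A minimal sketch of that dispatch pattern, assuming the layers can run as independent async checks: fan everything out with `asyncio.gather`, then report every failing layer as a conflict rather than averaging the results into one score. The three check functions are stand-ins for the real lens layers.

```python
import asyncio

async def structural_check(page: str) -> dict:
    # Stand-in for the structural lens layer.
    return {"layer": "structural", "ok": "architecture" in page}

async def narrative_check(page: str) -> dict:
    # Stand-in for the narrative lens layer.
    return {"layer": "narrative", "ok": "why" in page}

async def voice_check(page: str) -> dict:
    # Stand-in for the voice protocol pass.
    return {"layer": "voice", "ok": "delve" not in page}

async def coordinate(page: str) -> tuple[list[dict], list[str]]:
    """Run every layer at once and surface disagreements instead of
    smoothing them over; conflicts go to the human, not to a merge step."""
    results = await asyncio.gather(
        structural_check(page), narrative_check(page), voice_check(page))
    conflicts = [r["layer"] for r in results if not r["ok"]]
    return list(results), conflicts
```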


The thing that surprised me is how invisible it becomes when it works. The output is so consistent with my actual voice, so free of machine artifacts, and so grounded in specific details that a sophisticated AI evaluator concluded it must have been hand-written. The governance worked well enough that a reader would never guess the machinery was there.

The site presents results that look hand-crafted. The machinery that produced them is described on every page, but because it's so effective at removing its own fingerprints, the reader assumes the human did the manual labor of writing. I'm still not sure how I feel about that, honestly. The whole point is that the system compiled my thinking, and the fact that it worked means nobody sees the system at all.


The FormWork page describes the coordination harness and its tools. Those same tools compiled that page. The voice protocol was verified against samples extracted from my own conversations. Savepoint Syntax exists because savepoints marked the cognitive turns during its own construction. And This Site describes the build process that produced it. The argument for the system is the output you've been reading, which was compiled by the process it describes.


The question for the next few years isn't whether AI can produce good work. It clearly can. The question is whether it can produce your work: output that a blind evaluator cannot distinguish from your best hand-written thinking. That level of fidelity is only possible when the constraints are yours, the source material is yours, and the governance is yours.

This system took longer than writing the site by hand would have. The goal was always a tight loop between the source and the output: every sentence traces back to something real, and the methodology is proven by its own product.

The evaluator called the voice "High-Taste Human." The system produced something a machine couldn't identify as machine-produced, because the machine wasn't the author. The source material was three years of me thinking out loud, and the system was built to make sure the thinking survived the compilation process. I think it did. But I'm also aware that the better the system works, the harder it is to see the system working, and I haven't fully sorted out what that means yet.
