<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Peter Salvato</title>
    <description>The latest articles on DEV Community by Peter Salvato (@petersalvato).</description>
    <link>https://dev.to/petersalvato</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3838569%2F19da9f7c-691f-4ac8-a41b-e88b61c2090f.jpg</url>
      <title>DEV Community: Peter Salvato</title>
      <link>https://dev.to/petersalvato</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/petersalvato"/>
    <language>en</language>
    <item>
      <title>I'm Compilative, Not Generative</title>
      <dc:creator>Peter Salvato</dc:creator>
      <pubDate>Tue, 24 Mar 2026 14:50:53 +0000</pubDate>
      <link>https://dev.to/petersalvato/im-compilative-not-generative-11gn</link>
      <guid>https://dev.to/petersalvato/im-compilative-not-generative-11gn</guid>
      <description>&lt;p&gt;Most people use AI as a generator. They prompt, they get content, they edit the content down. The AI originates the material and the human shapes it after the fact.&lt;/p&gt;

&lt;p&gt;I use it as a compiler.&lt;/p&gt;

&lt;p&gt;The distinction matters because it determines who actually wrote the thing.&lt;/p&gt;




&lt;p&gt;A compiler takes source code a human wrote and transforms it into something a machine can execute. The programmer writes the program. The compiler transforms it. Nobody credits gcc with authorship.&lt;/p&gt;

&lt;p&gt;My source code is the accumulated working knowledge: decisions made on construction sites, in print shops, across thirteen years on an enterprise platform, inside classrooms, over kitchen counters. Three years of that thinking lives in thousands of conversations: sessions where I argued with tools, worked through problems, explained things to myself, failed at things until they worked.&lt;/p&gt;

&lt;p&gt;That's the codebase. It's raw, unpolished, and entirely mine. It exists because the system was designed to accept it raw. In &lt;a href="https://petersalvato.com/systems/formwork/" rel="noopener noreferrer"&gt;FormWork&lt;/a&gt;, the first accommodation is aimed at the human: get the thinking out of your head with as little friction as possible. Talk, dictate, answer questions. Don't organize, don't outline, don't perform. The rawness is the point. Structured input has already lost the thing the tools need most, which is how I actually think.&lt;/p&gt;

&lt;p&gt;The system (the &lt;a href="https://petersalvato.com/practice/this-site/" rel="noopener noreferrer"&gt;voice protocol&lt;/a&gt;, the &lt;a href="https://petersalvato.com/systems/lensarray/" rel="noopener noreferrer"&gt;evaluation lenses&lt;/a&gt;, the knowledge skill that traverses my ideation history) supplies the compiler passes. Those passes mine the source material, evaluate it against criteria I set, and assemble it into output that sounds like me because the source material &lt;em&gt;is&lt;/em&gt; me. The site is what comes out the other end, and I'm still refining how that pipeline works.&lt;/p&gt;




&lt;p&gt;Generation starts from a prompt and produces something new. The AI draws on its training data and constructs output. The human's contribution is the instruction. Everything else comes from the model.&lt;/p&gt;

&lt;p&gt;Compilation starts from existing source material and transforms it. The source material only exists because the human was accommodated: no requirement to structure, organize, or perform. The human's contribution is the thinking itself, captured in their actual voice. The system's contribution is the transformation: mining, evaluating, assembling. It invents nothing. If a claim can't be traced back to something I actually said or decided in the raw material, it doesn't ship.&lt;/p&gt;

&lt;p&gt;A &lt;a href="https://petersalvato.com/essays/the-site-is-the-proof/" rel="noopener noreferrer"&gt;blind evaluator&lt;/a&gt; read &lt;a href="https://petersalvato.com/" rel="noopener noreferrer"&gt;my site&lt;/a&gt; and could not identify machine involvement. The voice came through because the raw material was mine, and the system was built to preserve it rather than replace it.&lt;/p&gt;




&lt;p&gt;This distinction matters for one reason: authorship.&lt;/p&gt;

&lt;p&gt;If you use AI as a generator, the AI is the author and you're the editor. Your contribution is taste, choosing which outputs to keep. That's real work, but you're selecting from material someone else originated.&lt;/p&gt;

&lt;p&gt;If you use AI as a compiler, you are the author. Your accumulated practice, your specific decisions, your voice as it actually sounds in unguarded conversation. That's the source code. The system transforms it but doesn't originate any of it.&lt;/p&gt;

&lt;p&gt;I built the &lt;a href="https://petersalvato.com/systems/" rel="noopener noreferrer"&gt;governance layer&lt;/a&gt; to make sure that line stays clear. &lt;a href="https://petersalvato.com/systems/formwork/" rel="noopener noreferrer"&gt;FormWork&lt;/a&gt; coordinates the tools that shape the work: &lt;a href="https://petersalvato.com/systems/savepoint/" rel="noopener noreferrer"&gt;SavePoint Syntax&lt;/a&gt; preserves my cognitive turning points so no session contradicts what's already been settled, and &lt;a href="https://petersalvato.com/systems/lensarray/" rel="noopener noreferrer"&gt;LensArray&lt;/a&gt; evaluates across independent dimensions. The voice protocol catches the moment the output stops sounding like me and starts sounding like a machine performing me.&lt;/p&gt;

&lt;p&gt;Honestly, that infrastructure takes more effort to maintain than I expected when I started building it.&lt;/p&gt;




&lt;p&gt;Most people use AI to save time. This system actually took longer than writing the site by hand would have. The goal was always &lt;a href="https://petersalvato.com/vocabulary/fidelity/" rel="noopener noreferrer"&gt;fidelity&lt;/a&gt;: keeping the output locked to the source material so that every sentence traces back to something I actually said or decided. The authorship stays mine because the thinking was mine first.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>writing</category>
      <category>programming</category>
      <category>creativity</category>
    </item>
    <item>
      <title>The Site Is the Proof</title>
      <dc:creator>Peter Salvato</dc:creator>
      <pubDate>Tue, 24 Mar 2026 14:50:32 +0000</pubDate>
      <link>https://dev.to/petersalvato/the-site-is-the-proof-534i</link>
      <guid>https://dev.to/petersalvato/the-site-is-the-proof-534i</guid>
      <description>&lt;p&gt;In March 2026, I asked Gemini to review petersalvato.com. I provided no prior knowledge, no context about who I am, and no explanation of how the site was built. I wanted a blind evaluation of the voice, the structure, and the "humanity" of the work. The evaluator spent time with the pages and then delivered its verdict. It praised the "anti-slop" quality of the writing. It noted the idiosyncratic taxonomy as a sign of a specific mental model. It identified what it called "pragmatic cynicism" and "contextual asymmetry" as clear markers of a human author who had actually lived through the projects described. The conclusion was definitive: the site was "unequivocally" human-derived. The only way AI could have been involved, the evaluator noted, was if someone had used an LLM to tighten up existing, very strong human drafts.&lt;/p&gt;

&lt;p&gt;Then I told it the truth. Every page on &lt;a href="https://petersalvato.com/" rel="noopener noreferrer"&gt;my site&lt;/a&gt; was compiled by the system described on the site itself. The content was never hand-drafted and polished with AI. The methodology I've spent three years building produced every sentence, every structural decision, every project description. The evaluator's response shifted: "You haven't just built a website: you've built a self-documenting compiler for identity."&lt;/p&gt;




&lt;p&gt;The source material for this compiler isn't a set of drafts. It's a raw corpus of my own thinking. Between January 2023 and early 2026, I accumulated thousands of conversations across ChatGPT, Claude Code, Gemini, and Claude.ai. Three years of thinking out loud: arguing with tools, working through complex architectural problems, explaining things to myself, and failing at things until they finally worked. This material is raw, unpolished, and entirely conversational. It was never meant to be read directly. It was meant to be compiled.&lt;/p&gt;

&lt;p&gt;I use AI as a compiler, not a generator. My conversations are the data. The system I built mines that data, evaluates it against a set of rigorous standards, and compiles it into the result you see here.&lt;/p&gt;




&lt;p&gt;That corpus exists because the system was designed to accept raw thinking. In &lt;a href="https://petersalvato.com/systems/formwork/" rel="noopener noreferrer"&gt;FormWork&lt;/a&gt;, the first accommodation is aimed at the human: get the idea out of your head with as little friction as possible. Talk, dictate, answer questions. No requirement to organize or perform. The material stays conversational, and that's the point. It carries my actual voice, my actual sentence rhythms, the imagery I reach for when I'm thinking, not writing. The tools that follow can only preserve what the source material already contains. If the input had been structured, polished, performed, the output would sound like everyone. The rawness is what carries the voice through.&lt;/p&gt;

&lt;p&gt;The pipeline that operates on this source material is four tools, each one built to solve a specific gap between human thought and machine output.&lt;/p&gt;

&lt;p&gt;First, a knowledge skill traverses the full corpus. It doesn't matter if the data is a JSON export from ChatGPT or a Markdown log from Claude Code; the skill identifies "real moments." It looks for what actually happened, what I actually said in the heat of a project, and what decisions were actually made. This is the foundation of the site's &lt;a href="https://petersalvato.com/vocabulary/fidelity/" rel="noopener noreferrer"&gt;fidelity&lt;/a&gt;. The system is specifically prevented from inventing anything. Every claim traces to a verified source in the corpus, or it gets cut. (&lt;a href="https://petersalvato.com/essays/i-needed-a-better-tool/" rel="noopener noreferrer"&gt;I Needed a Better Tool&lt;/a&gt;)&lt;/p&gt;

&lt;p&gt;Second, a voice protocol makes sure the output matches how I actually talk. Most people write for publication by performing a version of themselves. They use "furthermore" and "moreover"; they "delve" into "vibrant tapestries." In my conversations, I don't talk like that. I'm matter-of-fact, occasionally cynical, and focused on specifics. The voice protocol uses a 12-item checklist to catch AI writing patterns, marketing language, and performed formality. It extracts the voice from my unguarded sessions and applies it to the compiled output. (&lt;a href="https://petersalvato.com/essays/voice-governance/" rel="noopener noreferrer"&gt;Voice Governance&lt;/a&gt;)&lt;/p&gt;

&lt;p&gt;Third, the work is evaluated through multiple lenses extracted from real practitioners. These aren't "act as a designer" caricatures. They are codified evaluative frameworks built by studying the actual output and decision-making patterns of experts. We extract the questions these practitioners consistently ask of a piece of work and validate those extractions against work they actually produced. By running multiple lenses against a single dimension of the work, we surface tensions. Where the lenses agree, we have a strong signal. Where they disagree, I make the choice. (&lt;a href="https://petersalvato.com/essays/persona-extraction/" rel="noopener noreferrer"&gt;Lens Extraction&lt;/a&gt;)&lt;/p&gt;

&lt;p&gt;Finally, a coordinator dispatches the entire process in parallel. Structural lenses, narrative lenses, voice checks, and baseline verifications all run at once. This is where the governance happens. If the structural layer says the engineering is sound but the narrative layer says the identity of the project is buried, the system doesn't smooth it over. It surfaces the conflict. The accumulated decisions I make to resolve those tensions are the work. (&lt;a href="https://petersalvato.com/essays/the-system/" rel="noopener noreferrer"&gt;The Integrated System&lt;/a&gt;)&lt;/p&gt;




&lt;p&gt;The thing that surprised me is how invisible it becomes when it works. The output is so consistent with my actual voice, so free of machine artifacts, and so grounded in specific details that a sophisticated AI evaluator concluded it must have been hand-written. The governance worked well enough that a reader would never guess the machinery was there.&lt;/p&gt;

&lt;p&gt;The site presents results that look hand-crafted. The machinery that produced them is described on every page, but because it's so effective at removing its own fingerprints, the reader assumes the human did the manual labor of writing. I'm still not sure how I feel about that, honestly. The whole point is that the system compiled my thinking, and the fact that it worked means nobody sees the system at all.&lt;/p&gt;




&lt;p&gt;The &lt;a href="https://petersalvato.com/systems/formwork/" rel="noopener noreferrer"&gt;FormWork&lt;/a&gt; page describes the coordination harness and its tools. Those same tools compiled that page. The voice protocol was verified against samples extracted from my own conversations. &lt;a href="https://petersalvato.com/systems/savepoint/" rel="noopener noreferrer"&gt;Savepoint Syntax&lt;/a&gt; exists because savepoints marked the cognitive turns during its own construction. And &lt;a href="https://petersalvato.com/practice/this-site/" rel="noopener noreferrer"&gt;This Site&lt;/a&gt; describes the build process that produced it. The argument for the system is the output you've been reading, which was compiled by the process it describes.&lt;/p&gt;




&lt;p&gt;The question for the next few years isn't whether AI can produce good work. It clearly can. The question is whether it can produce &lt;em&gt;your&lt;/em&gt; work: output that a blind evaluator cannot distinguish from your best hand-written thinking. That level of &lt;a href="https://petersalvato.com/vocabulary/fidelity/" rel="noopener noreferrer"&gt;fidelity&lt;/a&gt; is only possible when the constraints are yours, the source material is yours, and the governance is yours.&lt;/p&gt;

&lt;p&gt;This system took longer than writing the site by hand would have. The goal was always a tight loop between the source and the output: every sentence traces back to something real, and the methodology is proven by its own product.&lt;/p&gt;

&lt;p&gt;The evaluator called the voice "High-Taste Human." The system produced something a machine couldn't identify as machine-produced, because the machine wasn't the author. The source material was three years of me thinking out loud, and the system was built to make sure the thinking survived the compilation process. I think it did. But I'm also aware that the better the system works, the harder it is to see the system working, and I haven't fully sorted out what that means yet.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>writing</category>
      <category>webdev</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Voice Governance</title>
      <dc:creator>Peter Salvato</dc:creator>
      <pubDate>Tue, 24 Mar 2026 14:39:40 +0000</pubDate>
      <link>https://dev.to/petersalvato/voice-governance-5g6c</link>
      <guid>https://dev.to/petersalvato/voice-governance-5g6c</guid>
      <description>&lt;p&gt;Read ten AI-assisted "About" pages and you'll notice they sound identical. The same cadence, the same transitions, the same way of building to a point. Different words, same voice. The person disappears and what's left is the tool's default register.&lt;/p&gt;

&lt;p&gt;You can fix this partially with style guides, voice examples, tone specifications. The output gets better than the default, but it still won't sound like the person. It sounds like an AI doing an impression of a style guide. The reason is structural, and once I figured out why, I could build something that actually works.&lt;/p&gt;

&lt;h2&gt;Why it happens&lt;/h2&gt;

&lt;p&gt;Large language models learn to write from published text. Blog posts, articles, marketing copy, documentation, books. All of it polished. All of it shaped for an audience. Published writing is a performance.&lt;/p&gt;

&lt;p&gt;The way someone writes a LinkedIn post is not how they think. The way someone writes a case study is not how they explain the same project to a friend over dinner. The rough edges, the false starts, the way someone actually arrives at an idea: gone before publication.&lt;/p&gt;

&lt;p&gt;So when you ask an AI to write "in your voice" and give it your published work as examples, you're handing it the performance. The AI learns to imitate that, and since most people's published performances converge toward the same conventions (clean transitions, parallel structure, building to a thesis), the output converges too. Different people, same register.&lt;/p&gt;

&lt;p&gt;The real voice lives in conversations. Working messages. The unguarded explanations where someone is thinking out loud instead of presenting a finished thought.&lt;/p&gt;

&lt;h2&gt;The conversations&lt;/h2&gt;

&lt;p&gt;Between January 2023 and early 2026, I talked to AI tools the way I used to talk into sketchbooks. ChatGPT first, then Claude Code, then Claude.ai and Gemini alongside them. Thousands of sessions. Indexed together, tens of thousands of documents.&lt;/p&gt;

&lt;p&gt;Most of that material is me talking. Explaining problems, working through decisions, arguing with myself about naming, reacting to what the tool produced, directing implementation. The way I start sentences, the vocabulary I reach for when I'm not performing, how I describe problems (feeling first or situation first), how I transition between ideas (connectors or hard breaks), what makes me funny and what makes me frustrated.&lt;/p&gt;

&lt;p&gt;That conversational material is the actual voice. Published writing filters it out, which is why you can't use published writing as the source.&lt;/p&gt;

&lt;h2&gt;The pipeline&lt;/h2&gt;

&lt;p&gt;Tens of thousands of documents of me talking. The question was what to do with them.&lt;/p&gt;

&lt;h3&gt;Voice sampling&lt;/h3&gt;

&lt;p&gt;The voice-sample skill reads my conversation transcripts and captures observations about how I actually talk. The output isn't a style guide. It's a tuning fork. It tracks sentence rhythm (short bursts or long flowing thoughts?), opening moves ("so basically...", "the thing is...", "yeah let's..."), natural vocabulary, transitions, humor, qualifying phrases. Each session that does copy work adds observations to a persistent file, and over time it builds a detailed fingerprint.&lt;/p&gt;

&lt;p&gt;The key constraint: voice sampling reads conversations, not published pages. Published pages are the output. Conversations are the source. Conflating those is how you end up with AI voice that sounds polished but empty.&lt;/p&gt;

&lt;h3&gt;Governance constraint&lt;/h3&gt;

&lt;p&gt;I had a draft of my &lt;a href="https://petersalvato.com/practice/this-site/" rel="noopener noreferrer"&gt;This Site&lt;/a&gt; page that ended with "The structure is the signal." It scanned fine. The rhythm was satisfying. I read it twice before I noticed it meant nothing. That sentence could end any page about systems, infrastructure, design, governance. It belonged to no one in particular and said nothing specific about my work. That catch became a rule: no fortune-cookie closers. Sentences that feel like insight without containing any.&lt;/p&gt;

&lt;p&gt;That's how most of these rules got built. I caught a pattern, named it, codified the prohibition.&lt;/p&gt;

&lt;p&gt;I feed the voice sample into the copywriting protocol as a hard constraint. When I write anything with the system, the rules are specific: vary sentence length (short declaratives mixed with longer causal chains), use material vocabulary (holds, breaks, drifts, scaffold, fidelity, load, joints, terrain, coherence, lock), open with a real moment rather than a concept, show action and consequence rather than description and explanation.&lt;/p&gt;

&lt;p&gt;Every rule is enforceable and has a specific origin. "Zero em dashes" because they accumulate into a cadence that belongs to no specific person. "Every negation-affirmation pattern has to be earned" because "Not X. Y." is usually emphasis posing as correction. Each prohibition got written specific enough that I couldn't rationalize my way past it later.&lt;/p&gt;

&lt;h3&gt;The 12-item checklist&lt;/h3&gt;

&lt;p&gt;Before anything publishes, a copy-verify skill runs a 12-item pass/fail check. Twelve items sounds like a lot. Most of them are fast. A few do real work.&lt;/p&gt;

&lt;p&gt;Item 7 checks whether every "Not X. Y." pattern is earned. That pattern is the single most common tell in AI-assisted copy. "Not a portfolio. A world." "Not documentation. A living system." The structure feels insightful because the rhythm implies a correction, but most of the time there's no genuine misunderstanding being corrected. The negation is doing emphasis work, not clarification work. The rule: if the reader wouldn't actually think X before you said Y, rewrite it. On early drafts of my project pages, item 7 flagged three or four instances per page. Every one of them was emphasis disguised as insight.&lt;/p&gt;

&lt;p&gt;Item 4 is the simplest and catches the most. Zero em dashes. AI writing defaults to em dashes the way spoken English defaults to "like." One draft of a project page had eleven. They all looked fine individually. Together they created a rhythm that belonged to no specific person. The rule is binary: zero, not fewer.&lt;/p&gt;

&lt;p&gt;Item 10 asks whether the copy feels like a room. Would it feel wrong on someone else's site? This is the identity coherence check. A page can pass every mechanical rule and still read like competent generic copy. Item 10 catches the kind of writing that could belong to anyone with similar credentials.&lt;/p&gt;

&lt;p&gt;The remaining nine items cover the rest of the surface: does the opener land with a stranger, does it pass the Grip Test, are details traceable to verified sources, zero banned words (paradigm, leverage, passionate, innovative, synergy, empower, journey, transformative), no fortune-cookie closers, no ungrounded metaphors, no personification of tools, frontmatter matching body, and index entries staying current.&lt;/p&gt;

&lt;p&gt;Eleven of the twelve items are mechanical. They require checking specific, verifiable conditions. That's by design: governance works when the checks are concrete enough that you can't rationalize your way past them.&lt;/p&gt;
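&lt;p&gt;The mechanical items lend themselves to automation. Here is a minimal sketch of how two of them could be scripted; the function names are my own for this example, and the word-boundary handling is deliberately naive, so this is an illustration of the shape of the checks, not the actual copy-verify skill.&lt;/p&gt;

```python
# Illustrative sketch of two mechanical checklist items.
# Not the real copy-verify skill; names and details are assumptions.

BANNED_WORDS = {"paradigm", "leverage", "passionate", "innovative",
                "synergy", "empower", "journey", "transformative"}

def check_em_dashes(text):
    """Item 4: binary rule. Zero em dashes, not fewer."""
    return text.count("\u2014") == 0

def check_banned_words(text):
    """Banned-word scan over lowercased, whitespace-split copy.
    A real check would normalize punctuation as well."""
    words = text.lower().split()
    return BANNED_WORDS.isdisjoint(words)

def run_mechanical_checks(text):
    """Run the pass/fail items and report each verdict by name."""
    checks = {"zero_em_dashes": check_em_dashes,
              "zero_banned_words": check_banned_words}
    return {name: fn(text) for name, fn in checks.items()}
```

Because every check returns a bare pass/fail, there is nothing to argue with: a draft either clears the gate or it doesn't.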

&lt;h2&gt;The Grip Test&lt;/h2&gt;

&lt;p&gt;Item 2 is not mechanical. It asks whether the writing passes the Grip Test at Grip or Lock level, but answering that requires judgment about a stranger's experience. You already understand the work, so you can't tell whether a stranger would.&lt;/p&gt;

&lt;p&gt;I named it after my friend Ben. I showed him an early draft of the &lt;a href="https://petersalvato.com/systems/savepoint/" rel="noopener noreferrer"&gt;SavePoint Syntax&lt;/a&gt; page and asked what he thought. He said he could get "a fingernail hold" on it. He could tell it was something, but he couldn't feel why it mattered. That phrase became the standard.&lt;/p&gt;

&lt;p&gt;The Grip Test has three ratings:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fingernail:&lt;/strong&gt; The reader can see it's something but can't feel why it matters. They'd leave the page without a clear reason to care.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Grip:&lt;/strong&gt; The reader feels the problem even if they haven't lived it. They understand the stakes through the writing alone.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lock:&lt;/strong&gt; The reader recognizes their own experience in what they're reading. The writing connects to something they already know but maybe haven't articulated.&lt;/p&gt;

&lt;p&gt;Every page should land at Grip or Lock for a stranger. If a page sits at Fingernail, the opening isn't working. The test also identifies the "in": what shared human experience does this connect to? Losing something you can't get back? Building something nobody asked for? The gap between what you know and what you can prove?&lt;/p&gt;

&lt;h2&gt;What this solves&lt;/h2&gt;

&lt;p&gt;The pipeline constrains the space of possible output so the things AI writing typically gets wrong are caught before they ship. The voice sample sets the tuning reference. The governance rules knock out the most common failure modes. The checklist catches what slips through. Voice still requires a person. What the pipeline does is keep the tool from overwriting that person with its default register.&lt;/p&gt;

&lt;p&gt;Every published page on &lt;a href="https://petersalvato.com/" rel="noopener noreferrer"&gt;my site&lt;/a&gt; has been through this pipeline. Read them back to back and you'll hear a specific person with real opinions and a specific way of getting to the point. That's the thing I was trying to protect.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>writing</category>
      <category>designsystems</category>
      <category>productivity</category>
    </item>
    <item>
      <title>The IEP for AI Systems</title>
      <dc:creator>Peter Salvato</dc:creator>
      <pubDate>Tue, 24 Mar 2026 14:39:14 +0000</pubDate>
      <link>https://dev.to/petersalvato/the-iep-for-ai-systems-39nn</link>
      <guid>https://dev.to/petersalvato/the-iep-for-ai-systems-39nn</guid>
      <description>&lt;p&gt;I taught a self-contained 4/5 bridge class in Sunset Park, Brooklyn. Twelve kids, every subject, every accommodation, every IEP goal. Self-contained means there's no other teacher running the plan. You are the plan. You build it, run it, and adjust it in real time when it falls apart at 10:15 on a Tuesday because the thing that worked yesterday doesn't work today.&lt;/p&gt;

&lt;p&gt;The same problem keeps showing up everywhere I work. Take something too complex for the system receiving it, decompose it into pieces the system can actually process, build structure to hold the pieces in relation, and make sure they produce a coherent result when they come back together. Construction sites, print shops, enterprise platforms, AI skill architectures. The classroom is where I learned it.&lt;/p&gt;

&lt;p&gt;An IEP is an Individualized Education Program. Federal law requires one for every student receiving special education services. It specifies what the student needs, how progress gets measured, what accommodations are required, and what success looks like for that specific learner. Specific to the individual learner, not the class or the grade level.&lt;/p&gt;

&lt;p&gt;In a self-contained classroom, you're running twelve of these simultaneously. Twelve different sets of goals, twelve different accommodation profiles, twelve different definitions of progress. The class moves forward together, but the path through the material is individualized per student.&lt;/p&gt;

&lt;p&gt;You learn three things fast in that room. All three turned out to be the same things I use to architect AI systems.&lt;/p&gt;

&lt;h2&gt;Decompose or fail&lt;/h2&gt;

&lt;p&gt;A fourth grader with processing delays can't receive a multi-step math problem as a single instruction. "Solve for the missing number, show your work, and explain your reasoning" is three tasks disguised as one. The student hears the first instruction, starts working, and the second two are gone.&lt;/p&gt;

&lt;p&gt;Task decomposition means breaking that into discrete steps. Each step has one clear objective. Each step produces a visible result before the next step begins. The student isn't doing less work. The work is sequenced so each piece is achievable on its own.&lt;/p&gt;

&lt;p&gt;That's a structural decision about how complex work gets delivered to a system that can't process it whole. That sentence describes a fourth grader in Sunset Park. It also describes a large language model receiving a compound evaluation prompt.&lt;/p&gt;

&lt;p&gt;A monolithic prompt that says "evaluate this portfolio for voice quality, structural integrity, narrative coherence, and brand alignment" is four tasks disguised as one. The model receives the first criterion, starts working, and the others drift. Output quality degrades as the instruction gets longer. &lt;a href="https://petersalvato.com/vocabulary/context/" rel="noopener noreferrer"&gt;Context gets polluted&lt;/a&gt;. The model can't hold all four evaluation frames simultaneously, so it collapses them into a blended average that's none of the four.&lt;/p&gt;

&lt;p&gt;The move that works in the classroom works here too. Pull the compound apart. Give each evaluation dimension its own skill, its own objective, its own visible result before the next one fires.&lt;/p&gt;

&lt;p&gt;I call the single-purpose skills "atomics." Each one does one thing. &lt;a href="https://petersalvato.com/systems/lensarray/" rel="noopener noreferrer"&gt;LensArray&lt;/a&gt; runs them as separate diagnostics: one lens tests for structural restraint (extracted from Vignelli's body of work), another tests for narrative identity (extracted from Victore's), another tests for whether a stranger would understand this in sixty seconds. They don't know about each other. They don't need to. Their job is to measure one thing accurately.&lt;/p&gt;

&lt;p&gt;The operation is identical to the classroom. Break the complex task into pieces the system can hold. Let each piece produce a clear result. Sequence them so nothing gets lost.&lt;/p&gt;
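&lt;p&gt;The decomposition move can be sketched in a few lines. The criteria names and the &lt;code&gt;evaluate&lt;/code&gt; stand-in below are invented for the example; the point is the shape, one narrow call per criterion instead of one blended compound prompt.&lt;/p&gt;

```python
# Illustrative decomposition sketch: a compound evaluation becomes
# one call per criterion. evaluate() is a stand-in, not a real API.

CRITERIA = ["voice quality", "structural integrity",
            "narrative coherence", "brand alignment"]

def evaluate(criterion, work):
    # Stand-in for dispatching one atomic skill with a single,
    # narrow objective. Here it just returns a labeled verdict.
    return {"criterion": criterion, "verdict": "pass"}

def run_atomics(work):
    # Each criterion gets its own call and its own visible result.
    return [evaluate(c, work) for c in CRITERIA]
```

Each atomic knows nothing about the others, so each result can be inspected, trusted, or rerun on its own.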

&lt;h2&gt;Scaffold, then remove&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://petersalvato.com/vocabulary/scaffold/" rel="noopener noreferrer"&gt;Scaffolding&lt;/a&gt; in special education is temporary support structure. You provide it while the student is building competence, and you remove it as the competence solidifies. A graphic organizer helps a student plan a paragraph. Once the student can plan without the organizer, the organizer goes away. If you leave the scaffold in place permanently, you've built a dependency, not a skill.&lt;/p&gt;

&lt;p&gt;The coordinator pattern in my skill architecture works the same way. A coordinator is a thin orchestration layer that dispatches atomic skills, collects their results, and synthesizes a verdict. The coordinator doesn't do the evaluation. It manages the flow. The dispatch rules determine which atomics run in parallel (because they're independent) and which run in sequence (because one depends on another's output).&lt;/p&gt;

&lt;p&gt;The coordinator is scaffolding. It holds the structure while the pieces do the work. If I hardcoded the synthesis logic into the atomics themselves, each one would need to know about all the others. They'd be coupled. Change one and you'd break three. Instead, the coordinator carries the structural knowledge. The atomics stay simple, single-purpose, and independently testable.&lt;/p&gt;

&lt;p&gt;The audit coordinator I built dispatches nine evaluation lenses in parallel. Each lens runs independently. The coordinator collects nine separate verdicts and identifies where they agree and where they contradict. That contradiction is the valuable signal. Two lenses scoring the same work differently means there's a real tension to resolve. The coordinator surfaces it. The atomics just measured.&lt;/p&gt;

&lt;p&gt;A well-designed classroom works the same way. You hold the structure while the learners do the work, and you take it down once you see they don't need it anymore.&lt;/p&gt;

&lt;h2&gt;Individualize the criteria&lt;/h2&gt;

&lt;p&gt;The hardest part of running twelve IEPs simultaneously isn't the paperwork. It's that success looks different for every student. One student's goal is writing a complete sentence. Another student's goal is writing a paragraph with a topic sentence and supporting detail. They're sitting next to each other. They're working on the same assignment. The criteria for "done" are completely different.&lt;/p&gt;

&lt;p&gt;This is exactly the problem with evaluating creative work. "Is this portfolio good?" is not a meaningful question. Good by whose criteria? Structural lenses say the grid is clean and the typography is consistent. Narrative lenses say it feels like it could be anyone's site. Both verdicts are correct. They're measuring against different criteria.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://petersalvato.com/systems/lensarray/" rel="noopener noreferrer"&gt;LensArray&lt;/a&gt; handles this the way an IEP handles a classroom. Each evaluation lens has its own criteria, its own definition of success, its own pass/fail threshold. The lenses don't vote. They don't average. They each produce an independent verdict against their own standard. The convergence analysis (where lenses agree) and the contradiction analysis (where they disagree) are both useful outputs. A consensus means the work is solid on that dimension. A contradiction means there's a design decision to make.&lt;/p&gt;

&lt;p&gt;In the classroom, when two IEP goals conflicted (one student needed quiet, another needed verbal processing), the resolution was a structural decision: where to seat them, when to schedule which activity, how to create pockets of different conditions within one room. The conflict wasn't a problem. It was information about what the room needed to accommodate.&lt;/p&gt;

&lt;p&gt;Evaluation lenses work the same way. When structural and narrative lenses disagree, the disagreement tells me what decision I need to make as the designer. That's often the most useful thing the system gives me.&lt;/p&gt;

&lt;h2&gt;One lane&lt;/h2&gt;

&lt;p&gt;The AI governance conversation is dominated by two groups. Computer scientists who think about model architecture. Business strategists who think about risk and compliance. Neither group has stood in a room where the system you're managing is twelve kids with twelve different definitions of success, and the feedback loop is immediate because a ten-year-old will tell you in real time when your scaffolding isn't working.&lt;/p&gt;

&lt;p&gt;Special education teaches you that complex systems with variable inputs need individualized evaluation criteria, temporary support structures, and task decomposition. I learned that in a classroom where the feedback was a ten-year-old shutting down in front of me, not a degraded BLEU score.&lt;/p&gt;

&lt;p&gt;Every IEP is a governance document. It specifies what gets measured, how it gets measured, what accommodations the system provides, and what success looks like. It gets reviewed. It gets updated. It gets enforced by federal law because the stakes are that high.&lt;/p&gt;

&lt;p&gt;I didn't study AI governance and then discover it maps to pedagogy. I spent a year in a self-contained classroom in Sunset Park, then spent eighteen years applying the same structural patterns to enterprise platforms, brand systems, and design evaluation. When I started building AI skill systems, the architecture was already in my hands.&lt;/p&gt;

&lt;p&gt;Decompose the complex task. Scaffold the structure. Individualize the criteria. Monitor progress against specific goals. Adjust when the feedback says your plan isn't working.&lt;/p&gt;

&lt;p&gt;But underneath all of that is a simpler move. Before I built any of the architecture, before I decomposed a single prompt, I asked the same question I asked about every student in that classroom: what does this system require from me to succeed at this task? Not what do I want from it. What does it need from me.&lt;/p&gt;

&lt;p&gt;That question applies in both directions. The model needs decomposed tasks, structured input, independent evaluation. The human needs friction removed at the point of capture, so the raw thinking enters the system intact. In &lt;a href="https://petersalvato.com/systems/formwork/" rel="noopener noreferrer"&gt;FormWork&lt;/a&gt;, that means: get the idea out of your head, talk, dictate, answer questions, and let the tools handle the rest. The accommodation runs both ways.&lt;/p&gt;

&lt;p&gt;Once you start asking that question, it changes how you see the whole field. It's the difference between treating a model like an employee who should be better and treating it like a system with a specific processing reality that you can design around. Token limits are a working memory &lt;a href="https://petersalvato.com/vocabulary/processing-profile/" rel="noopener noreferrer"&gt;processing profile&lt;/a&gt;. Context windows are the attention span you're designing for. That reframe is where the IEP training actually lives, more than any specific technique. The techniques follow once you stop expecting the system to compensate for your lack of preparation.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>architecture</category>
      <category>education</category>
      <category>designsystems</category>
    </item>
    <item>
      <title>What I Built in 2025</title>
      <dc:creator>Peter Salvato</dc:creator>
      <pubDate>Tue, 24 Mar 2026 14:38:47 +0000</pubDate>
      <link>https://dev.to/petersalvato/what-i-built-in-2025-518o</link>
      <guid>https://dev.to/petersalvato/what-i-built-in-2025-518o</guid>
      <description>&lt;p&gt;In February I read my site top to bottom as a visitor and found a fabricated claim. A sentence about work I never did, written in my voice, that sounded specific and grounded. It had survived every quality check I had. Five independent tools had evaluated the page. All five passed it. The sentence was still wrong.&lt;/p&gt;

&lt;p&gt;That failure produced the last tool I built this year. But it only makes sense if you see the failures that came before it.&lt;/p&gt;

&lt;h2&gt;The dump (realized last, existed first)&lt;/h2&gt;

&lt;p&gt;Everything downstream depends on getting raw thinking into the system without friction. I didn't understand this until after I'd built the tools that process the material. Looking back, I'd been dumping for three years before I recognized it as a practice: thousands of sessions of thinking out loud, dictating on drives, brainstorming at 2 AM. That raw material carried my actual voice, my actual sentence structure, the way I actually move between ideas. Every tool I built afterward draws from it.&lt;/p&gt;

&lt;h2&gt;SavePoint&lt;/h2&gt;

&lt;p&gt;Context kept evaporating across sessions. I'd have a breakthrough, the session would close, and the reasoning that connected everything would disappear. The next session started from zero. I started typing "give me a savepoint" before conversations ended. Same instinct as saving a video game before a boss fight: you're about to lose your progress, so you dump your state. That survival reflex became &lt;a href="https://petersalvato.com/systems/savepoint/" rel="noopener noreferrer"&gt;SavePoint&lt;/a&gt;. It preserved the thinking across session boundaries so the next session could pick up where the last one left off.&lt;/p&gt;

&lt;h2&gt;Voice protocol&lt;/h2&gt;

&lt;p&gt;The output stopped sounding like me. The tools were producing competent prose that could have been anyone's. LLMs learn from published writing, and published writing is performance. My real voice is in conversation transcripts: rough, full of false starts, full of direction changes. Twenty-five years of design practice taught me to hear when something drifts from the original intent. The voice protocol externalized that ear so it could run at scale, and it governs every sentence on this site.&lt;/p&gt;

&lt;h2&gt;LensArray&lt;/h2&gt;

&lt;p&gt;"Is this good?" was never one question. It was twelve questions collapsed into one, and the answer was always vague. I needed evaluation decomposed into independent dimensions. Millman's criteria for authenticity. Bierut's criteria for whether design is solving a problem or decorating one. Shaw's test for whether something feels like a world. Extracted from real practitioners' bodies of work, codified as testable diagnostics. They disagree with each other. The disagreements are where the real decisions live.&lt;/p&gt;

&lt;h2&gt;The coordination layer&lt;/h2&gt;

&lt;p&gt;Individual tools, each working correctly, producing wrong results. That hallucinated attribution was the proof. Seventeen diagnostic skills that each do one thing. Coordinators that dispatch them in parallel. Convergence that surfaces where they agree and disagree. My job is to read the disagreements and decide which value wins. The coordination layer exists because correctness at the component level doesn't guarantee anything at the system level, and I think I learned that on construction sites before I learned it here.&lt;/p&gt;

&lt;p&gt;Each tool exists because the previous one failed or wasn't enough. The dump gave the tools material. &lt;a href="https://petersalvato.com/systems/savepoint/" rel="noopener noreferrer"&gt;SavePoint&lt;/a&gt; kept the material from evaporating. The voice protocol kept the output honest. &lt;a href="https://petersalvato.com/systems/lensarray/" rel="noopener noreferrer"&gt;LensArray&lt;/a&gt; decomposed evaluation into real decisions. The coordination layer caught what the individual tools could not. The methodology behind all of it is documented on the &lt;a href="https://petersalvato.com/systems/formwork/" rel="noopener noreferrer"&gt;FormWork&lt;/a&gt; page.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>designsystems</category>
      <category>programming</category>
    </item>
  </channel>
</rss>
