<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: 고광웅</title>
    <description>The latest articles on DEV Community by 고광웅 (@ernham).</description>
    <link>https://dev.to/ernham</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3886873%2F8197dd3d-2edb-4b7e-8532-d8c6f2156632.jpg</url>
      <title>DEV Community: 고광웅</title>
      <link>https://dev.to/ernham</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/ernham"/>
    <language>en</language>
    <item>
      <title>Cross-posting to Three Platforms Forced Me to Rethink What 'The Same Post' Means</title>
      <dc:creator>고광웅</dc:creator>
      <pubDate>Thu, 23 Apr 2026 14:18:43 +0000</pubDate>
      <link>https://dev.to/ernham/cross-posting-to-three-platforms-forced-me-to-rethink-what-the-same-post-means-4l19</link>
      <guid>https://dev.to/ernham/cross-posting-to-three-platforms-forced-me-to-rethink-what-the-same-post-means-4l19</guid>
      <description>&lt;h2&gt;
  
  
  The Simple Mental Model That Failed
&lt;/h2&gt;

&lt;p&gt;Last week I finished writing an essay for this newsletter. I wrote it once. I published it three times — Substack (the original), Dev.to (English cross-post), Tistory (Korean rewrite).&lt;/p&gt;

&lt;p&gt;My mental model going in was: &lt;em&gt;write once, redistribute.&lt;/em&gt; Source of truth on Substack, automation handles the rest. The posts on Dev.to and Tistory are "the same post," just on different platforms.&lt;/p&gt;

&lt;p&gt;By the end of the week I had written five essays, run the full three-channel pipeline on each one, and learned something I probably should have anticipated: &lt;strong&gt;there is no such thing as "the same post."&lt;/strong&gt; Each platform rendered the same starting content into a different object, and the pipeline only started working when I accepted that and built for each platform's native primitive.&lt;/p&gt;

&lt;p&gt;This post is what I noticed.&lt;/p&gt;

&lt;h2&gt;
  
  
  What "Primitive" Means Here
&lt;/h2&gt;

&lt;p&gt;Each platform has an atomic unit the reader is actually consuming. Get the unit right and the rest of the post works; get it wrong and the post technically publishes but lands flat.&lt;/p&gt;

&lt;p&gt;For the three platforms I ran this week:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Substack's primitive is the sentence, delivered to an email inbox.&lt;/strong&gt; A subscriber opens their inbox in the morning. Their posture is sit-and-read. They committed to hearing from this author in this voice. What matters: the opening hook that survives "do I open this email?", the rhythm of the prose, the feeling of being written &lt;em&gt;to&lt;/em&gt; rather than &lt;em&gt;at&lt;/em&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Dev.to's primitive is the snippet, discovered in a feed.&lt;/strong&gt; A developer opens Dev.to during a work break and scrolls. Their posture is scan-first, save-for-later, read-if-it-looks-useful. What matters: the first-paragraph payoff, the code blocks that can be screenshotted, the tags that make the post findable a week later.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Tistory's primitive is the local context, arrived at via Naver search.&lt;/strong&gt; A Korean reader searches for something like "AI agent persona" in Korean and clicks the third result. Their posture is compare-with-other-sources, skim-for-answer. What matters: Korean sentence rhythm that doesn't feel translated, reference points a Korean builder would recognize, canonical URL so the SEO credit routes correctly.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;"The same post" collapsed those three primitives into one thing in my head. The pipeline I built assumed that's what they were. It wasn't.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where the Pipeline Broke
&lt;/h2&gt;

&lt;p&gt;The breaks weren't dramatic. They were a series of small misfires that, taken together, forced me to redesign each platform's adaptation step.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Substack's default avatar became my cover image, once.&lt;/strong&gt;&lt;br&gt;
When an already-published Substack post has no cover image set, the social-card &lt;code&gt;og:image&lt;/code&gt; pulls the generic publication avatar — a small gray sphere that, scaled up to 1456×819, looks like an out-of-focus moon. I tried retroactively updating the cover via the API; the draft state updated, but the public page rendering didn't. Substack appears to snapshot the post's social card at first publish. The cover decision has to happen &lt;em&gt;before&lt;/em&gt; publish. I added cover generation to the pre-publish pipeline after losing one post's cover to this.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Dev.to rejected my default Python User-Agent with "Forbidden Bots."&lt;/strong&gt;&lt;br&gt;
The Dev.to API works fine with a valid token — but only if the request includes a non-default User-Agent header. Python's &lt;code&gt;urllib&lt;/code&gt; sends &lt;code&gt;Python-urllib/X.Y&lt;/code&gt; by default, and Cloudflare in front of Dev.to returns 403 for that string. The fix is a one-line header addition, but it cost me an hour of debugging "why is my valid token being rejected?" when the problem was never the token.&lt;/p&gt;
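
&lt;p&gt;A minimal sketch of the fix, not the pipeline's exact code: any identifying &lt;code&gt;User-Agent&lt;/code&gt; gets past Cloudflare, and &lt;code&gt;api-key&lt;/code&gt; is the header the Dev.to API documents. The agent string below is a placeholder.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import json
import urllib.request

def devto_request(path, payload, api_key):
    """POST to the Dev.to API with a non-default User-Agent.

    Cloudflare rejects the default Python-urllib/X.Y string with 403,
    so any identifying string is enough to get through.
    """
    req = urllib.request.Request(
        "https://dev.to/api" + path,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "api-key": api_key,
            "Content-Type": "application/json",
            "User-Agent": "crosspost-pipeline/0.1",  # the one-line fix
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
&lt;/code&gt;&lt;/pre&gt;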

&lt;p&gt;&lt;strong&gt;Tistory's write page has two editors, and I was writing to the wrong one.&lt;/strong&gt;&lt;br&gt;
The Tistory write page renders a TinyMCE iframe as the main editor. A CodeMirror instance also exists in the DOM as a backup for HTML-mode users. I found CodeMirror first and wrote my injected content there. The UI saved the TinyMCE content, which was empty. I got eleven empty posts published as drafts before I noticed. Now the injection routine tries TinyMCE's &lt;code&gt;activeEditor.setContent(html)&lt;/code&gt; first, and CodeMirror is a fallback for HTML-mode users.&lt;/p&gt;
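
&lt;p&gt;A sketch of that ordering with Playwright's sync API. It assumes the &lt;code&gt;tinymce&lt;/code&gt; global is reachable from the top frame, and the CodeMirror selector is illustrative rather than exact.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;def inject_body_html(page, html):
    """Write post HTML into whichever editor the Tistory write page will save.

    TinyMCE first, because that is what the UI persists; CodeMirror only as
    the HTML-mode fallback. Selectors are illustrative, not exhaustive.
    """
    if page.evaluate("Boolean(window.tinymce)"):
        page.evaluate(
            "function(html) { tinymce.activeEditor.setContent(html); }", html
        )
        return "tinymce"
    # HTML mode exposes a CodeMirror 5 instance on its wrapper element.
    page.evaluate(
        "function(html) { document.querySelector('.CodeMirror').CodeMirror.setValue(html); }",
        html,
    )
    return "codemirror"
&lt;/code&gt;&lt;/pre&gt;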

&lt;p&gt;&lt;strong&gt;Tistory's visibility radio was clicking but not selecting.&lt;/strong&gt;&lt;br&gt;
The publish modal has three visibility radio labels — public, public-protected, private — each written in Korean. My Playwright script searched for the Korean word for "public" and clicked the first match. The click registered in the event log. The saved post was still private. The issue was that my selector was finding the first instance of the word — which turned out to be a header label elsewhere on the page, not the radio option. Fixing it meant scoping the text search to the modal and preferring &lt;code&gt;[role="radio"]&lt;/code&gt; / &lt;code&gt;&amp;lt;label&amp;gt;&lt;/code&gt; elements with exact text match.&lt;/p&gt;
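
&lt;p&gt;A sketch of the scoped version, again with Playwright. The &lt;code&gt;.publish-layer&lt;/code&gt; selector is a stand-in for whatever the modal's actual container is; the scoping is the point.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;def select_public_visibility(page):
    """Select the 'public' radio inside the publish modal, not the first
    stray occurrence of the word elsewhere on the page."""
    modal = page.locator(".publish-layer")  # modal selector is illustrative
    radio = modal.get_by_role("radio", name="공개", exact=True)
    if radio.count():
        radio.first.click()
    else:
        # Fallback for skins that render plain labels instead of ARIA radios.
        modal.locator("label").filter(has_text="공개").first.click()
    # Afterwards, re-read the modal state (or the saved post) instead of
    # trusting the click event log.
&lt;/code&gt;&lt;/pre&gt;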

&lt;p&gt;&lt;strong&gt;The Korean translation came out technically correct but translation-y.&lt;/strong&gt;&lt;br&gt;
The first-pass Korean rewriter produced grammatically fine Korean that a native reader would immediately clock as translated. Common patterns — awkward passive constructions, the same noun-phrase ending repeated across three consecutive sentences, English word order showing through in Korean syntax. I added a second-pass editor that takes the first pass as input and specifically targets these translation-y patterns. I also added a smell score — a cheap regex-based heuristic counting six known patterns — so I could measure whether the second pass was actually improving output. On one post the first pass scored 14, the second scored 3. On another post, the first pass was already clean (scored 3), and the editor responsibly left it alone. I take the second result as more important than the first: the pipeline knows when not to edit.&lt;/p&gt;
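
&lt;p&gt;The smell score itself is nothing fancy. A sketch of its shape, with illustrative patterns standing in for the six the pipeline actually counts:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import re

# Illustrative stand-ins for the six patterns the pipeline actually uses.
TRANSLATIONY_PATTERNS = [
    r"되어지",      # double passive, a classic translationese tell
    r"에 의해",     # "by X" passive marker carried over from English word order
    r"것이다\.",    # flat declarative ending; repeated, it reads translated
]

def smell_score(korean_text):
    """Cheap regex heuristic: count pattern hits across the draft.
    Lower is better; the second pass should drive this number down."""
    return sum(len(re.findall(p, korean_text)) for p in TRANSLATIONY_PATTERNS)
&lt;/code&gt;&lt;/pre&gt;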

&lt;h2&gt;
  
  
  What Each Platform Actually Needed
&lt;/h2&gt;

&lt;p&gt;Once I accepted "same content" was a category error, the adaptations started looking like three different products.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Substack adaptation&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Title treated as email subject line, not blog headline. Punchy, specific, survives inbox clutter.&lt;/li&gt;
&lt;li&gt;Cover image generated &lt;em&gt;before&lt;/em&gt; first publish. Typographic, dark navy with accent color, consistent series aesthetic.&lt;/li&gt;
&lt;li&gt;Paywall section markers — none yet, but placeholder structure so I can add them later without rewriting.&lt;/li&gt;
&lt;li&gt;Markdown → ProseMirror node tree (Substack's body format isn't raw markdown; the API needs it serialized to their doc structure). A minimal sketch of that node shape follows this list.&lt;/li&gt;
&lt;/ul&gt;
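
&lt;p&gt;For that last item, a minimal sketch of the ProseMirror doc shape, assuming plain-text paragraphs only; the real converter also has to emit heading, list, image, and mark nodes.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;def paragraphs_to_prosemirror(paragraphs):
    """Minimal ProseMirror doc: plain-text paragraphs only.
    Real posts also need heading, list, image, and mark nodes."""
    return {
        "type": "doc",
        "content": [
            {
                "type": "paragraph",
                "content": [{"type": "text", "text": para}],
            }
            for para in paragraphs
            if para.strip()
        ],
    }
&lt;/code&gt;&lt;/pre&gt;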

&lt;p&gt;&lt;strong&gt;Dev.to adaptation&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Tags normalized to lowercase alphanumeric (Dev.to strips anything else); the payload sketch after this list shows the normalization.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;canonical_url&lt;/code&gt; pointed to the Substack post so search engines credit the original.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;main_image&lt;/code&gt; sourced from the already-uploaded Substack CDN URL. Dev.to has no image upload API — their parser fetches external URLs — so reusing Substack's CDN saved a redundant upload step.&lt;/li&gt;
&lt;li&gt;Filter in the image URL extractor skips the Substack subscribe-card avatar so Dev.to doesn't pick up a blurry placeholder as the hero image.&lt;/li&gt;
&lt;/ul&gt;
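
&lt;p&gt;Taken together, the Dev.to step mostly reduces to building one payload. A sketch, assuming something like the request helper from earlier; the field names are the documented Forem API ones, the normalization regex is mine.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import re

def build_devto_payload(title, body_markdown, tags, substack_url, cover_url):
    """Assemble the Dev.to article payload from the already-adapted pieces."""
    normalized = [re.sub(r"[^a-z0-9]", "", t.lower()) for t in tags]
    return {
        "article": {
            "title": title,
            "body_markdown": body_markdown,
            "published": True,
            "tags": [t for t in normalized if t][:4],  # Dev.to takes at most four
            "canonical_url": substack_url,  # SEO credit routes to the original
            "main_image": cover_url,        # Substack CDN URL; Dev.to fetches it
        }
    }
&lt;/code&gt;&lt;/pre&gt;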

&lt;p&gt;&lt;strong&gt;Tistory adaptation&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Two-pass Korean rewrite, with the second pass measurable and skippable if the first pass is already clean.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;canonical&lt;/code&gt; (as a blockquote link in the body) pointing back to Substack. Naver ignores canonical URLs for ranking, but Google still respects them, so the tag is more about cross-platform SEO hygiene than Naver ranking.&lt;/li&gt;
&lt;li&gt;Netscape cookies merged from &lt;code&gt;tistory.com&lt;/code&gt; and &lt;code&gt;kakao.com&lt;/code&gt; (Tistory auth goes through Kakao).&lt;/li&gt;
&lt;li&gt;Idempotency check: if &lt;code&gt;blog_drafts.tistory_url&lt;/code&gt; is already set, skip re-publish unless &lt;code&gt;--force&lt;/code&gt;. Eleven duplicate test posts taught me this one. A small sketch of the check follows this list.&lt;/li&gt;
&lt;li&gt;Playwright UI automation as the transport layer. Tistory's Open API shut down in February 2024 and isn't coming back.&lt;/li&gt;
&lt;/ul&gt;
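
&lt;p&gt;The check is small enough to show whole; the draft record's shape here is a guess at what a &lt;code&gt;blog_drafts&lt;/code&gt; row carries.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;def should_publish_to_tistory(draft, force=False):
    """Skip drafts that already have a Tistory URL unless --force was passed.
    Eleven duplicate test posts paid for this check."""
    if draft.get("tistory_url") and not force:
        print("skip: already published at " + draft["tistory_url"])
        return False
    return True
&lt;/code&gt;&lt;/pre&gt;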

&lt;h2&gt;
  
  
  The Platforms Where I Stopped
&lt;/h2&gt;

&lt;p&gt;Medium was on my list. I wrote the publisher module, added the env token field, and then discovered the Integration Token feature now requires Medium Partner Program membership — which isn't available to all accounts. Shipping it behind a paywall wasn't worth it for one channel.&lt;/p&gt;

&lt;p&gt;Naver Blog has an official OpenAPI, but the content format is constrained enough (limited HTML, external link penalties) that automating it would require another rewrite pass — a third-pass "Naver blog format" rewriter. That's on the list for later, not this week.&lt;/p&gt;

&lt;p&gt;I note both of these because the multi-platform question isn't just "which platforms work?" It's "which platforms are worth the adaptation cost?" A platform with a non-trivial rewrite pass costs more than the same reach on a platform that already speaks my primitive.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Actually Learned
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Same content is a category error.&lt;/strong&gt; The words are the same input. The artifact produced by each platform is a different object. Treating them as one job — "publish the post everywhere" — hides the fact that each adaptation is non-trivial and each platform's primitive is different.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pipelines should speak each platform's native language.&lt;/strong&gt; A Tistory-shaped post is not a machine-translated Substack post. It's a different artifact with different idioms, different reader context, different SEO concerns. The pipeline that glues them has to make the translation at the platform's level, not the language's level.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Measure the adaptations, not just the publishes.&lt;/strong&gt; I almost shipped a Korean rewrite that was technically fluent but read as translated. The only reason I caught it was the smell-score regex I added as a sanity check. The pipeline's quality gate has to be at the &lt;em&gt;rendered output&lt;/em&gt;, not at the API status code.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;APIs die. Plan for UI automation.&lt;/strong&gt; Medium and Tistory both used to have Open APIs that worked. Neither does now. Playwright-based publishing is uglier than API-based, but it survives policy changes that break APIs. Anything publishing-adjacent should have a Playwright fallback path.&lt;/p&gt;

&lt;h2&gt;
  
  
  For Other Builder-Writers Considering Multi-Platform
&lt;/h2&gt;

&lt;p&gt;Three things I'd do differently if I were starting this week rather than ending it:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Write down each platform's primitive before adapting for it.&lt;/strong&gt;&lt;br&gt;
What is the reader's posture? Where did they arrive from? What format does the platform natively render well? The answer to those three questions determines most of the adaptation work.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Build idempotency from the first post, not the twelfth.&lt;/strong&gt;&lt;br&gt;
I published eleven duplicates on Tistory before I added "if already published, skip" logic. Ten minutes of upfront design would have prevented ninety minutes of cleanup.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Treat each platform's automation as a separate product with its own failure modes.&lt;/strong&gt;&lt;br&gt;
The Dev.to 403, the Substack cover re-render quirk, the Tistory editor ambiguity — none of these share a root cause. They each required platform-specific debugging. Pretending the automation is one system creates the illusion of a single code path where there are actually three.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Close
&lt;/h2&gt;

&lt;p&gt;The instinct to cross-post from one source to many channels is correct. The hidden cost is the adaptation work that you don't see until you ship. Same words in, different objects out.&lt;/p&gt;

&lt;p&gt;After a week of running this in anger, my conclusion is that cross-posting is really &lt;em&gt;cross-rendering.&lt;/em&gt; The same source, rendered by different primitives, into different platforms' native formats. The pipeline that makes this pleasant to run respects each platform's primitive rather than forcing uniformity across them.&lt;/p&gt;

&lt;p&gt;If your source content is generic enough that the rendering difference doesn't matter — short announcements, product launches, pull-quotes — the naive approach works. For anything essay-length, opinion-driven, or audience-differentiated, the primitive shift is real, and the pipeline has to know about it.&lt;/p&gt;

</description>
      <category>crossposting</category>
      <category>contentstrategy</category>
      <category>platformautomation</category>
      <category>seo</category>
    </item>
    <item>
      <title>I Wrote Four Posts. Then I Let Them Decide My Roadmap. Here's Why I Stopped.</title>
      <dc:creator>고광웅</dc:creator>
      <pubDate>Wed, 22 Apr 2026 13:28:26 +0000</pubDate>
      <link>https://dev.to/ernham/i-wrote-four-posts-then-i-let-them-decide-my-roadmap-heres-why-i-stopped-3lgc</link>
      <guid>https://dev.to/ernham/i-wrote-four-posts-then-i-let-them-decide-my-roadmap-heres-why-i-stopped-3lgc</guid>
      <description>&lt;h2&gt;
  
  
  The Week That Felt Productive
&lt;/h2&gt;

&lt;p&gt;Last week I published four posts on this newsletter. They were observation pieces about AI agent products — what current platforms get wrong about persona, about conversation-quality monitoring, about the relationship between AI and runtime, about optimizing for proxy metrics instead of real outcomes.&lt;/p&gt;

&lt;p&gt;Each post stood on its own. They also connected — four angles on the same underlying claim: &lt;em&gt;most AI agent products today ship simple primitives (static personas, per-call logs, workflow builders) when the more accurate primitives are runtime-level (situational steering, trajectory-level observability, content-aware design).&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;By the fifth day, I noticed I wasn't just writing. I was building a worldview.&lt;/p&gt;

&lt;p&gt;Yesterday I opened a new document to plan product direction for the agent platform I run. Within twenty minutes I had drafted a framework with four improvement tracks, each mapped one-to-one onto those four posts. The plan felt right. It was logically consistent, evidence-backed, and matched my product's current architecture.&lt;/p&gt;

&lt;p&gt;I asked the person I work with most closely — who sometimes functions as my critical second opinion — whether this was the right direction.&lt;/p&gt;

&lt;p&gt;The answer: &lt;em&gt;technically sound, strategically premature.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;That stopped me. And it taught me something I want to write down before I forget it.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Trap Has a Name
&lt;/h2&gt;

&lt;p&gt;The pattern I walked into isn't new. It deserves a name, so I'll call it the &lt;strong&gt;content-to-product alignment trap&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;It works like this:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;You write essays about how the world should be.&lt;/li&gt;
&lt;li&gt;The essays are logically coherent and emotionally satisfying to produce.&lt;/li&gt;
&lt;li&gt;You build a worldview from the act of writing them.&lt;/li&gt;
&lt;li&gt;You then try to build your product around that worldview.&lt;/li&gt;
&lt;li&gt;You mistake the coherence of the worldview for evidence that the product is right.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The coherence is real. But coherence is not demand. Your essays are a theory of what's broken in the industry. Your product has to solve what's actually breaking for the users you're trying to reach. These can overlap. They can also diverge entirely.&lt;/p&gt;

&lt;p&gt;Here's the part I didn't expect: &lt;em&gt;the trap gets stronger when you write a series.&lt;/em&gt; A single essay is easier to hold at arm's length. Four essays in a week, all reinforcing the same thesis, feel like a position paper. The more consistent the series, the more the author mistakes its internal consistency for external validity.&lt;/p&gt;

&lt;p&gt;I wrote posts that argued AI agent platforms should have situation-aware personas, trajectory-level observability, content-first design, and outcome-based evaluation. Each claim is defensible. But I have no user interview data showing my actual users are blocked on any of those four problems. I just wrote about them compellingly, and four posts later I was ready to bet a roadmap on them.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why This Fails Quietly
&lt;/h2&gt;

&lt;p&gt;The tricky part of this trap is that it doesn't feel like a mistake while you're in it.&lt;/p&gt;

&lt;p&gt;When you build a product based on real user pain, there's friction. Users complain, things don't make sense, hypotheses get falsified. The dissonance is productive. You update.&lt;/p&gt;

&lt;p&gt;When you build a product based on your own essays, there's no dissonance. You wrote the essays. You agree with yourself. Every decision confirms the worldview you already built. The feedback loop collapses into a loop of one.&lt;/p&gt;

&lt;p&gt;And because the essays are public, there's a second reinforcing mechanism: &lt;em&gt;public commitment.&lt;/em&gt; You've told readers the world is shaped a certain way. Pivoting your product away from that shape can feel like a reputational cost. So you don't pivot. You build.&lt;/p&gt;

&lt;p&gt;The product that results might even be well-engineered. It will just be well-engineered for the wrong problem.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Evidence I Wasn't Using
&lt;/h2&gt;

&lt;p&gt;When I sat with the pushback, I made a small but important list.&lt;/p&gt;

&lt;p&gt;Users of my platform don't write to me saying "my agent's conversation drifts structurally and we lack trajectory-level observability." They write to me saying things like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;it's too slow&lt;/li&gt;
&lt;li&gt;this integration broke&lt;/li&gt;
&lt;li&gt;the output missed the point&lt;/li&gt;
&lt;li&gt;I can't figure out what the agent actually did&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These aren't the same problems I wrote about this week. There's a relationship — trajectory observability would help diagnose "can't figure out what the agent did" — but "relationship" is not "direct match."&lt;/p&gt;

&lt;p&gt;When I mapped my four-track plan against these actual complaints, I found that &lt;strong&gt;one of the four tracks directly addressed a user-facing pain, and the other three were internal quality infrastructure.&lt;/strong&gt; The three would help my team operate better. They wouldn't be felt by users unless I explicitly surfaced them.&lt;/p&gt;

&lt;p&gt;I had written a roadmap where 75% of the work was invisible to the users it was supposedly for.&lt;/p&gt;

&lt;h2&gt;
  
  
  Author Voice vs. Builder Voice
&lt;/h2&gt;

&lt;p&gt;The fix isn't to stop writing about product observations. The fix is to recognize that the author and the builder are different roles, and they need to hear different things before they commit.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;As an author&lt;/strong&gt;, I'm allowed — encouraged — to write from observation. I can say "here's what I notice about the shape of this category" without being accountable to whether the observation solves anyone's concrete problem. Essays are for sense-making, hypothesis-forming, provocation. They don't need user research to be valuable.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;As a builder&lt;/strong&gt;, I'm not allowed to skip user research. I don't get to substitute my essays for it. A product decision grounded in "I wrote about this and it felt true" is a decision grounded in one person's sense-making. One person's sense-making is a terrible distribution to sample user need from.&lt;/p&gt;

&lt;p&gt;The trap is that these two voices live in the same head. The essay I wrote yesterday becomes the premise for the product decision I make today. Unless I consciously split the two, they blur.&lt;/p&gt;

&lt;p&gt;In my case the conscious split looks like this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Essays go on Substack. They're observations. They commit me to thinking publicly, not to building accordingly.&lt;/li&gt;
&lt;li&gt;Roadmap decisions go through a different filter: user interviews, retention data, competitive positioning, pricing experiments. Essays can be an input to that filter, but not the filter itself.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I had collapsed those two tracks last week without noticing.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Counter-Example I Should Have Learned From Earlier
&lt;/h2&gt;

&lt;p&gt;There's an irony here. One of the posts I wrote last week was about Claude Design — the product Anthropic launched, which I analyzed by reading source code of the skill bundles it produces. The deepest claim of that piece was: &lt;em&gt;Claude Design is mostly clever runtime engineering with a model at the entry point. The runtime is the product.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;What I missed, while writing enthusiastically about that claim, is how Claude Design actually got built. Anthropic didn't start by writing essays about "the AI design category needs a runtime layer." They built a product that users wanted (fast, good-looking design artifacts), and the runtime emerged as the architecture that made that product work well.&lt;/p&gt;

&lt;p&gt;Runtime-as-product is a retrospective description of Claude Design. It is not a prescription for how to decide what to build.&lt;/p&gt;

&lt;p&gt;If I had actually internalized that lesson, my own plan wouldn't have led with "let's build runtime infrastructure because it's the more accurate primitive." It would have led with "what user outcome is currently broken, and does a runtime change fix it?"&lt;/p&gt;

&lt;p&gt;The worldview from my essay was seductive. The worldview from how the product I admired actually came to exist was more useful, and I walked past it.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Changed
&lt;/h2&gt;

&lt;p&gt;After the conversation, I rewrote the plan.&lt;/p&gt;

&lt;p&gt;The four improvement tracks stayed. What changed was how they're classified and sequenced.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;One track&lt;/strong&gt; — the one that directly changes what users feel (agent behavior shifts with user state) — became the center. It is the only track that gets to carry product positioning.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Three tracks&lt;/strong&gt; — the observability layer, the artifact bundle format, the evaluation feedback loop — got reclassified as &lt;em&gt;internal engineering investments.&lt;/em&gt; They might make my team's operations better. They don't show up in external messaging. No user cares about "we added trajectory-level scoring" as a product pitch.&lt;/p&gt;

&lt;p&gt;I also added a &lt;strong&gt;positioning confirmation gate.&lt;/strong&gt; Before any of this gets committed to as product positioning, the user-facing track has to be validated with actual users. If three out of five test users can point to specific moments where the agent felt like it was reading their state, positioning is confirmed. If they can't, the work was still useful internally, but the positioning claim doesn't get made.&lt;/p&gt;

&lt;p&gt;That last part feels uncomfortable, which is how I know it's right. The honest product position is "we think this direction matters but we don't know yet if users will feel it." That's how the position should sit until the evidence shows up.&lt;/p&gt;

&lt;h2&gt;
  
  
  A Practical Test for Other Builder-Writers
&lt;/h2&gt;

&lt;p&gt;If you're both publishing and building, here's the test I now run before letting a piece of writing inform a product decision:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Would this observation survive contact with actual user interviews?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If your essay says users need X, and you haven't heard a user articulate X, that's a hypothesis, not a decision.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Is the elegance of the essay doing argumentative work the evidence isn't?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Coherent writing is a skill. Coherence inside the essay is not coherence with reality. Test whether the essay's strength is its logic or its evidence.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. If you had to remove the essay from consideration, would the product decision still look the same?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If no, the essay is the load-bearing reason for the decision. That's dangerous unless the essay is backed by something beyond the author's own sense-making.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. What is your essay actually an output of — user research, or your own pattern-matching?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Both are valid writing inputs. Only one is a valid product input.&lt;/p&gt;

&lt;p&gt;None of these tests disqualify writing from influencing product direction. They just force you to name which track the essay belongs on.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Stays
&lt;/h2&gt;

&lt;p&gt;I'm going to keep writing these essays. The observation work is valuable on its own. It shapes how I think, it clarifies what I can articulate, it creates accountability that makes the thinking sharper. A weekly essay is not a time waste, even if it doesn't steer the roadmap.&lt;/p&gt;

&lt;p&gt;What I'm not going to do, anymore, is let the essay be the plan.&lt;/p&gt;

&lt;p&gt;The series I wrote last week — four posts on agent product primitives — is still a real series about real observations. Those observations probably point at real gaps. I just don't know yet which of them are &lt;em&gt;my users' gaps&lt;/em&gt; versus &lt;em&gt;the industry's abstract gaps&lt;/em&gt;. That distinction matters more than I treated it.&lt;/p&gt;

&lt;p&gt;If you're reading this as a builder-writer yourself, the thing I'd watch for is the fifth day of a week when the series feels especially coherent. That's the moment when the author in your head tries to hand the product direction to the builder in your head. The handoff feels productive. It's actually a step sideways into fiction.&lt;/p&gt;

&lt;p&gt;Notice when it happens. Make the split conscious.&lt;/p&gt;

</description>
      <category>productstrategy</category>
      <category>founderwriting</category>
      <category>contentstrategy</category>
      <category>positioning</category>
    </item>
    <item>
      <title>Response Quality Is Not Conversation Quality. A Paper Quantifies the Gap.</title>
      <dc:creator>고광웅</dc:creator>
      <pubDate>Tue, 21 Apr 2026 16:09:14 +0000</pubDate>
      <link>https://dev.to/ernham/response-quality-is-not-conversation-quality-a-paper-quantifies-the-gap-o4k</link>
      <guid>https://dev.to/ernham/response-quality-is-not-conversation-quality-a-paper-quantifies-the-gap-o4k</guid>
      <description>&lt;h2&gt;
  
  
  The Metric Most Agent Products Are Missing
&lt;/h2&gt;

&lt;p&gt;Most AI evaluation work you see on agent products measures the same thing: &lt;em&gt;was this response good?&lt;/em&gt; You get a score per output, you track it over time, you look for regressions. That's the pattern whether you're using LLM-as-judge, user thumbs-up/down, or hand-graded samples.&lt;/p&gt;

&lt;p&gt;This is a reasonable thing to measure. It's also an incomplete thing to measure, in a way that matters more for multi-turn agents than the industry has quite caught up to.&lt;/p&gt;

&lt;p&gt;Here's what we're missing: a conversation can be full of individually good responses and still be &lt;em&gt;structurally&lt;/em&gt; broken. The agent contradicts itself across turns. It shifts topic in ways the user didn't cue. It answers the current message fine but has stopped tracking what was agreed three messages ago. Each response passes a quality bar. The conversation fails anyway.&lt;/p&gt;

&lt;p&gt;A paper uploaded to arXiv last week tries to formalize this gap — and more interestingly, proposes a way to measure it in production without embeddings, judges, or access to model internals. I want to walk through what it shows, because I think the measurement framing is more important than the specific method.&lt;/p&gt;

&lt;p&gt;The paper is &lt;a href="https://arxiv.org/abs/2604.13061" rel="noopener noreferrer"&gt;Token Statistics Reveal Conversational Drift in Multi-turn LLM Interaction&lt;/a&gt; (Hafez, Nazeri, v2 April 17).&lt;/p&gt;

&lt;h2&gt;
  
  
  The Number That Made Me Read It Twice
&lt;/h2&gt;

&lt;p&gt;Across 4,574 conversational turns spanning 34 conditions, three frontier teacher models and one student model, the authors report:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Their proposed signal aligns with &lt;strong&gt;structural consistency&lt;/strong&gt; in &lt;strong&gt;85%&lt;/strong&gt; of conditions.&lt;/li&gt;
&lt;li&gt;It aligns with &lt;strong&gt;semantic quality&lt;/strong&gt; in only &lt;strong&gt;44%&lt;/strong&gt; of conditions.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Put those two numbers next to each other and they tell a story.&lt;/p&gt;

&lt;p&gt;Response quality and conversational consistency are not the same thing. Measured side by side, they diverge. And the tools most teams use — LLM-as-judge on outputs, user feedback on individual responses — are measuring the 44% side of the gap, not the 85% side.&lt;/p&gt;

&lt;p&gt;If your agent is deployed in anything that looks like an ongoing interaction — support, coaching, tutoring, sales, therapy-adjacent use cases, long-form research, gaming — the side you're not measuring is the side where the trust breaks.&lt;/p&gt;

&lt;h2&gt;
  
  
  What the Paper Actually Proposes
&lt;/h2&gt;

&lt;p&gt;The authors define a metric they call &lt;strong&gt;Bipredictability (P)&lt;/strong&gt;. Their description of it, taken from the abstract: it "measures shared predictability across the context, response, next prompt loop relative to the turn total uncertainty."&lt;/p&gt;

&lt;p&gt;In plain terms: across a conversation turn, there's information that the &lt;em&gt;context&lt;/em&gt; predicts about the &lt;em&gt;response&lt;/em&gt;, information that the &lt;em&gt;response&lt;/em&gt; predicts about the &lt;em&gt;next prompt&lt;/em&gt;, and information that the &lt;em&gt;next prompt&lt;/em&gt; predicts about the &lt;em&gt;context&lt;/em&gt;. How much those loops overlap, relative to how uncertain each turn is overall, is what they track.&lt;/p&gt;

&lt;p&gt;The implementation is a lightweight auxiliary component they call the &lt;strong&gt;Information Digital Twin (IDT)&lt;/strong&gt; — running alongside the agent and computing P from the token stream. No embeddings. No auxiliary evaluator model. No white-box model access.&lt;/p&gt;

&lt;p&gt;That engineering profile matters. It means the signal can, in principle, sit in a production deployment at trivial cost. Most "measure what's happening in your LLM agent" proposals involve an LLM judge or a vector DB query per turn. This one is token frequency statistics.&lt;/p&gt;

&lt;p&gt;I haven't built their system, and the abstract doesn't go deep on implementation details, so I'm hedging on whether the engineering will be as clean in practice as the abstract implies. But the design choice is pointing at something real: if you want a monitoring signal that can run continuously in production, it has to be cheap enough to not change your deployment economics. Bipredictability, at least as described, fits that constraint.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Their IDT Caught
&lt;/h2&gt;

&lt;p&gt;The detection result reported in the abstract: &lt;strong&gt;100% sensitivity&lt;/strong&gt; for contradictions, topic shifts, and non-sequiturs in their tested set.&lt;/p&gt;

&lt;p&gt;Sensitivity claims at that level always deserve scrutiny — it means every tested failure was caught, not that every failure in every real deployment will be. The authors are testing against constructed conversations with known failures. Production distributions will be messier.&lt;/p&gt;

&lt;p&gt;Still, even accepting the number at face value for the test conditions, three failure types are worth naming because they map directly to real user complaints:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Contradictions&lt;/strong&gt; — agent says A in turn 3, says not-A in turn 12, and the user has been quietly losing faith since turn 7.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Topic shifts&lt;/strong&gt; — agent pivots away from the user's thread without a cue. Feels "off" in a way users rarely articulate.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Non-sequiturs&lt;/strong&gt; — response that's individually coherent but doesn't actually engage with what just happened.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you've ever had a user say "I don't know, it just stopped feeling right" — they're usually describing one of these three. None of them are caught by "rate this response 1-5" dashboards.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why This Is a Different Measurement Problem Than Response Quality
&lt;/h2&gt;

&lt;p&gt;I think the piece builders most often miss is that &lt;strong&gt;response-level evaluation and conversation-level evaluation are structurally different problems&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Response quality is a pointwise judgment. You can sample, score, aggregate. LLM-as-judge does a decent job of this. It's the kind of evaluation that fits neatly into existing observability tooling — each output is a discrete event with a score attached.&lt;/p&gt;

&lt;p&gt;Conversation-level consistency is a sequence problem. You can't score it by looking at any single turn. You need to look at relationships between turns. The measurement surface is the conversation trajectory, not individual messages.&lt;/p&gt;

&lt;p&gt;The tools haven't caught up. Agent observability platforms like Langfuse, LangSmith, and Helicone are doing better-than-ever work on per-call metrics — latency, cost, tool usage, response sampling. Very little in that category instruments conversation-level structural properties, which is the level where multi-turn agents mostly fail.&lt;/p&gt;

&lt;p&gt;The paper's contribution, from a tools-thinking perspective, is identifying that there's a cheap signal at this level if you know where to look.&lt;/p&gt;

&lt;h2&gt;
  
  
  Three Audit Questions For Your Multi-Turn Agents
&lt;/h2&gt;

&lt;p&gt;If you're shipping any kind of multi-turn agent, three questions are worth sitting with:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Do you measure anything about the conversation as a whole, or only about individual turns?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Most teams I know answer "only individual turns" after thinking about it. The shape of current dashboards enforces this — each row is a request.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. If a user tells you "the conversation got weird around message 15," can you go find what happened?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Most production agents don't retain full conversation state in a way that makes this analyzable after the fact. Or they retain it, but nothing about the trajectory is indexed or searchable.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Have you instrumented topic shifts, contradictions, or non-sequiturs in any form?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you haven't, you're outsourcing detection of these failures to your users. They'll notice, but you won't — and by the time they tell you, attrition has happened.&lt;/p&gt;

&lt;p&gt;These aren't theoretical failures. They're the most common complaint pattern in post-cancellation interviews I've seen for multi-turn AI products: "It worked fine at first but then kind of drifted."&lt;/p&gt;

&lt;h2&gt;
  
  
  What This Paper Leaves Open
&lt;/h2&gt;

&lt;p&gt;A few things the abstract doesn't settle that I'd want to know before building on it:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Exactly how Bipredictability is computed.&lt;/strong&gt; The verbal description is suggestive but not precise enough to reimplement from. Need to read the full paper.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Which frontier models were used.&lt;/strong&gt; The abstract says "three frontier teacher models" without naming them. Worth checking whether the signal transfers across model families.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Whether code or data is public.&lt;/strong&gt; The arXiv page doesn't list code or dataset resources. For a proposal that's essentially "add this runtime monitor to your system," reference implementation availability will determine how fast this gets adopted.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;False positive behavior at production scale.&lt;/strong&gt; 100% sensitivity on a curated test set is a different claim from "works reliably at scale without flooding you with false flags." The abstract doesn't report specificity in a form I can quote.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I'm flagging these not to dismiss the paper but because the distance between "this is the right idea" and "this is deployable" is where most interesting research lives, and it's worth staying honest about that distance.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Builder Takeaway
&lt;/h2&gt;

&lt;p&gt;The specific metric matters less to me than the framing. The framing is:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Response quality is a &lt;em&gt;property of individual outputs&lt;/em&gt;. Conversational reliability is a &lt;em&gt;property of the trajectory&lt;/em&gt;. If you only measure the first, you're blind to failure modes that happen at the second level — and those are the failure modes that drive user churn in multi-turn products.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Whatever the eventual best implementation turns out to be — Bipredictability, embeddings-based, something else — the thing worth internalizing is that there's a measurement gap here, and closing it probably requires rethinking what your agent observability stack is watching.&lt;/p&gt;

&lt;p&gt;For me, the immediate action from reading this isn't "implement IDT." It's closer to: audit what dashboards my team and I are actually looking at, and note how many of them measure conversation-level properties at all. The answer for most of us is going to be close to zero. That's the gap worth working on before worrying about which specific metric to adopt.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Throughline
&lt;/h2&gt;

&lt;p&gt;I've been writing this week about AI agent primitives — persona that's actually runtime steering rather than a static string, design that's content-structure-first, and now measurement that's trajectory-level rather than pointwise.&lt;/p&gt;

&lt;p&gt;There's a pattern connecting them. The abstractions we shipped first for AI agents were the ones that were easy to build: persona as string, design as template, evaluation as per-output score. In each case, the more accurate primitive is a little harder and a little more runtime-y: persona as steering, design as content-reading, evaluation as trajectory-watching.&lt;/p&gt;

&lt;p&gt;That's not a coincidence. Early AI product design has been constrained by what was cheap and easy to instrument at the call site. What I'm watching the research space do, right now, is build the tooling that lets the harder and more accurate primitives become cheap and easy too. When they do, the products that got shipped on the easier abstractions will look more brittle than they currently do.&lt;/p&gt;

&lt;p&gt;If you're building, the question worth asking isn't just "what do I ship now?" It's also "which of my current primitives is an early-days hack that I'll want to replace when better measurement lands?" For multi-turn agents, my guess is that evaluation is one of those — and this paper is a pointer toward where the replacement starts.&lt;/p&gt;

</description>
      <category>aiagents</category>
      <category>llmevaluation</category>
      <category>observability</category>
      <category>multiturn</category>
    </item>
    <item>
      <title>Your AI's Persona Is a String. A New Paper Argues It Should Be a Steering Vector.</title>
      <dc:creator>고광웅</dc:creator>
      <pubDate>Sun, 19 Apr 2026 14:08:10 +0000</pubDate>
      <link>https://dev.to/ernham/your-ais-persona-is-a-string-a-new-paper-argues-it-should-be-a-steering-vector-4bha</link>
      <guid>https://dev.to/ernham/your-ais-persona-is-a-string-a-new-paper-argues-it-should-be-a-steering-vector-4bha</guid>
      <description>&lt;h2&gt;
  
  
  The Mismatch Most Persona Products Live With
&lt;/h2&gt;

&lt;p&gt;If you've built any kind of AI agent product in the last two years, you've probably shipped a "persona" feature. Usually it looks like this: a text field where the user (or the product) writes "You are a witty, slightly sarcastic assistant who loves climbing," and that string gets stitched into a system prompt. Done. Persona complete.&lt;/p&gt;

&lt;p&gt;The thing is, nobody who has ever worked with real people thinks of personality that way. Actual humans don't have a single mode. The friendly coworker is different at 2am on a deadline. The patient teacher is different when a student is being deliberately obtuse. Situation changes behavior, and most of the time it changes it a lot.&lt;/p&gt;

&lt;p&gt;A paper that went up on arXiv this week formalizes that mismatch and proposes something interesting about how to fix it. It's not the kind of paper that'll get quoted in keynote slides — there are no dramatic benchmarks in the abstract — but the conceptual move is, I think, more important than the specific method.&lt;/p&gt;

&lt;p&gt;The paper is &lt;a href="https://arxiv.org/abs/2604.13846" rel="noopener noreferrer"&gt;Beyond Static Personas: Situational Personality Steering for Large Language Models&lt;/a&gt; (Wei, Li, Wang, Deng, April 15). Short version: instead of treating personality as a string you define once, treat it as a runtime steering signal over the model's neurons — one that shifts with the situation.&lt;/p&gt;

&lt;h2&gt;
  
  
  What the Paper Actually Argues
&lt;/h2&gt;

&lt;p&gt;The technical contribution is a framework the authors call &lt;strong&gt;IRIS&lt;/strong&gt; — &lt;em&gt;Identify, Retrieve, Steer&lt;/em&gt;. It's training-free and operates at the neuron level. Three parts:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Situational persona neuron identification&lt;/strong&gt; — find the specific neurons whose activation patterns correspond to personality traits in context.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Situation-aware neuron retrieval&lt;/strong&gt; — given a new situation, retrieve the relevant neuron set for the desired persona expression under that situation.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Similarity-weighted steering&lt;/strong&gt; — apply a steering vector to those neurons at inference time, weighted by how similar the current situation is to the retrieved references. (A generic sketch of this step, not the paper's exact method, follows the list.)&lt;/li&gt;
&lt;/ol&gt;
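
&lt;p&gt;To make that third step concrete without claiming to reproduce IRIS: the generic version of "apply a steering vector at inference time, weighted by situation similarity" is a forward hook that adds a scaled vector to one layer's hidden states. This is coarser than the paper's neuron-level method, and every name below is hypothetical.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import torch

def make_steering_hook(steer_vector, similarity):
    """Add a persona steering vector to one block's hidden states at inference.
    A coarse, layer-level stand-in for IRIS's neuron-level steering;
    similarity is the situation-match weight (0.0 to 1.0)."""
    def hook(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        steered = hidden + similarity * steer_vector.to(hidden.device, hidden.dtype)
        if isinstance(output, tuple):
            return (steered,) + output[1:]
        return steered
    return hook

# Hypothetical usage on a Hugging Face decoder model:
#   handle = model.model.layers[20].register_forward_hook(
#       make_steering_hook(persona_vector, situation_similarity))
#   ... run generation, then handle.remove()
&lt;/code&gt;&lt;/pre&gt;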

&lt;p&gt;What I find more interesting than the method is the empirical claim underneath it: the authors argue (and their analysis attempts to demonstrate) that situation-dependency and situation-behavior patterns already exist inside LLM personalities, at the neuron level. Personality isn't just an artifact of the system prompt — it's something the model has internalized structurally, and that structure is responsive to context.&lt;/p&gt;

&lt;p&gt;If that holds up under replication, the implication is bigger than IRIS itself. It means the right abstraction for "persona" in an LLM might not be &lt;em&gt;a description you write&lt;/em&gt; but &lt;em&gt;a manifold you steer&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;I'm hedging because the abstract doesn't give specific win margins and I haven't dug into the full paper. The method could under-perform cleaner approaches in practice. But the framing is worth thinking about regardless of whether IRIS turns out to be the method that wins.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why This Is a Design Problem, Not Just a Method Problem
&lt;/h2&gt;

&lt;p&gt;Here's the thing I keep coming back to. Most of the persona code I've written — and most of what I see shipped in agent products — treats persona as a &lt;strong&gt;compile-time primitive&lt;/strong&gt;. You write it once, it goes into the system prompt, and from that point forward the agent's "character" is whatever that text produces in combination with whatever comes after.&lt;/p&gt;

&lt;p&gt;What this paper is pointing at is that persona is arguably a &lt;strong&gt;runtime primitive&lt;/strong&gt;. It's not a fixed definition. It's a behavior modulation that should respond to context — and the model already has the internal machinery to do that if you know where to apply the signal.&lt;/p&gt;

&lt;p&gt;Those are two different things, and I don't think the industry has fully reckoned with the difference. We're selling "custom AI personas" while implementing static strings. The user-facing story is "you can make this agent sarcastic" but the implementation is a shim that barely survives contact with an adversarial user.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Game Designers Have Been Saying For Decades
&lt;/h2&gt;

&lt;p&gt;I spent a decade designing games before I started building AI agents. The paper's framing feels very familiar to me — it's arriving, through a different path, at something the game AI community has treated as common knowledge for a long time.&lt;/p&gt;

&lt;p&gt;Static NPC personalities get old in a session. A guard who always says the same thing in the same tone at the same time regardless of what the player has been doing is immediately legible as a set piece, not a character. The guards players remember are the ones that modulated — the ones whose threat level shifted with how many times you'd returned to the same area, the ones whose dialogue tree branched based on tension state.&lt;/p&gt;

&lt;p&gt;The vocabulary was different. We didn't say "steering vectors." We said mood systems, faction relationships, dynamic difficulty, dialog branching by tension. But the underlying insight is the same: behavior is a function of state × situation × character, not just character.&lt;/p&gt;

&lt;p&gt;The novelty of a paper like IRIS, from a game designer's lens, isn't the idea. It's the discovery that the scaffolding for this kind of behavior is already latent in LLM weights and can be activated without retraining. That part is genuinely new.&lt;/p&gt;

&lt;h2&gt;
  
  
  Three Questions to Ask About Your Own Persona Implementation
&lt;/h2&gt;

&lt;p&gt;If you ship a product where users can define or tune an AI's personality, it's worth auditing what you actually built against what you probably told users you built. Some specific questions:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. What happens to your persona when the user asks something hostile?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Static-string personas tend to collapse under adversarial pressure. The "patient teacher" prompt starts talking like a base model the moment someone pushes hard. If your persona is a product promise, you need a mechanism beyond a string — otherwise the promise is broken the first time someone tests it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Does your persona change register with conversation length?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Real teachers get firmer as a session drags on. Real assistants get more efficient as trust is established. If your agent sounds the same in message 1 and message 40, you've got a behavior rigidity that will eventually feel wrong to users.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. What does your persona do when the topic shifts to something the "character" wouldn't know about?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is the case where static personas fail most visibly. A persona designed around "warm emotional support" doesn't gracefully handle a user suddenly asking for tax advice. A situational model would know to shift register without dropping character. A string-based model can only either stay in character and refuse, or break character and help. Neither is right.&lt;/p&gt;

&lt;p&gt;These aren't theoretical. They're the three places where persona products routinely fail in ways that erode user trust.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Part That Matters for Builders
&lt;/h2&gt;

&lt;p&gt;I don't think the takeaway from this paper is that everyone should rewrite their agents to do neuron-level steering next week. The infrastructure to do that at production scale doesn't really exist outside research labs yet.&lt;/p&gt;

&lt;p&gt;The takeaway is more structural. The "persona" primitive most of us are using is probably a UI convenience over a more correct runtime mechanism. The more correct mechanism isn't accessible yet, but the mismatch is worth being honest about in how we design around persona features today.&lt;/p&gt;

&lt;p&gt;Some implications I'm thinking through:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Treat persona as a layered system rather than a single string.&lt;/strong&gt; Core traits at one level, situational modifiers at another, tone adjustments at a third. This is messier in the UX but closer to what's actually happening.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Build instrumentation for persona drift.&lt;/strong&gt; How does your agent's tone change across a long conversation? Across different user emotional states? You probably don't measure this and should.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Be wary of "custom persona" as a feature promise.&lt;/strong&gt; If your implementation is a text field and the model is doing the rest, you're selling something the mechanism can't reliably deliver. Setting user expectations honestly is better than overselling.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What This Paper Doesn't Settle
&lt;/h2&gt;

&lt;p&gt;I want to name a few things the paper, as I understand it from the abstract, does not resolve:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;The specific benchmarks (PersonalityBench and the authors' new SPBench) aren't standard in the field yet.&lt;/strong&gt; Situational personality benchmarks are hard to construct well, and it's possible a different benchmark would tell a different story.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Training-free methods are appealing for deployment but sometimes undersell what you'd get from even a small amount of targeted fine-tuning.&lt;/strong&gt; IRIS may be the right research contribution but not the right engineering choice for a given product.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Neuron-level steering is interpretability-adjacent territory, and that field has been notably humble about what its findings mean.&lt;/strong&gt; Identifying "persona neurons" is a strong claim that deserves scrutiny before anyone builds on it as foundational.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I'm flagging these not to pick fights with the paper but because conceptual takeaways are more portable than methodological ones, and conflating them is how builders end up chasing implementations that don't actually help their products.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Close
&lt;/h2&gt;

&lt;p&gt;What I'm sitting with, after reading this paper alongside the last few days of working on agent products, is that a lot of the primitives we use are shaped by what was easy to build rather than what is actually the right model of the thing we're building.&lt;/p&gt;

&lt;p&gt;Persona-as-string is easy. Persona-as-neural-steering-signal is hard. So we shipped the easy one. That's fair — you ship what works today. But it's worth occasionally asking whether the abstraction you shipped is actually the right abstraction, or just the one that was available.&lt;/p&gt;

&lt;p&gt;For persona specifically, my current guess is that the right abstraction is situational and runtime, not descriptive and static. The paper arrives at that conclusion through empirical analysis of neuron activations. Game designers arrived there through twenty years of making NPCs that didn't suck. Different paths, convergent answer.&lt;/p&gt;

&lt;p&gt;Whether IRIS is the specific mechanism that ends up winning is almost beside the point. What matters is the reframe: &lt;strong&gt;behavior is a function of situation, and persona is a steering problem, not a description problem.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you're building in this space, it's worth checking which one your product actually implements.&lt;/p&gt;

</description>
      <category>aiagents</category>
      <category>llm</category>
      <category>persona</category>
      <category>iris</category>
    </item>
    <item>
      <title>Claude Design Looks Like AI Magic. Reading the Source, It's Four Engineering Patterns.</title>
      <dc:creator>고광웅</dc:creator>
      <pubDate>Sun, 19 Apr 2026 04:34:45 +0000</pubDate>
      <link>https://dev.to/ernham/claude-design-looks-like-ai-magic-reading-the-source-its-four-engineering-patterns-3p9m</link>
      <guid>https://dev.to/ernham/claude-design-looks-like-ai-magic-reading-the-source-its-four-engineering-patterns-3p9m</guid>
      <description>&lt;h2&gt;
  
  
  Before the Hype Settles
&lt;/h2&gt;

&lt;p&gt;Anthropic shipped Claude Design on April 17, and most of the discussion has framed it as a Figma-challenging AI design tool. I used it differently. Instead of treating it as a design tool, I treated it as a specimen — generated a handful of skill bundles, exported the output, then spent more time reading the source than tweaking the design. I was more interested in &lt;em&gt;how the product works&lt;/em&gt; than in &lt;em&gt;what it can design&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;What I found is more interesting than "AI can design now." The product appears to be mostly four discrete engineering patterns that happen to have a model at the entry point. The model doesn't feel like the magic — it's writing into a carefully structured runtime that almost any team could build.&lt;/p&gt;

&lt;p&gt;This post walks through those four patterns, what they actually do, and what I'm taking into my own stack. Caveat up front: these are observations from a source read of a small number of bundles, not a rigorous evaluation of the product's full behavior. I'll flag assumptions as I go.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Inspected
&lt;/h2&gt;

&lt;p&gt;One Claude Design skill bundle I looked at contained:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;Wireframes.html&lt;/code&gt; — a 72KB single-file wireframe document with five navigation variations across three screens each, plus a live Tweaks engine.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;IR Deck - Hi-fi.html&lt;/code&gt; and &lt;code&gt;IR Deck - Wireframes.html&lt;/code&gt; — 1920×1080 slide decks wrapped in a custom Web Component.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;deck-stage.js&lt;/code&gt; — a 621-line Web Component that provides the slide runtime.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;colors_and_type.css&lt;/code&gt; — a 160-line design token sheet organized into seven categories.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;SKILL.md&lt;/code&gt; — a 20-line skill manifest with frontmatter.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;README.md&lt;/code&gt; — a 223-line brand and voice guide.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;preview/&lt;/code&gt; — twelve single-file "at-a-glance" cards, one per token category.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;ui_kits/web/&lt;/code&gt; — a React 19 UMD clickable prototype.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The total footprint is small. What's striking is how the pieces fit together — and how few moving parts there actually are.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pattern 1 — Tweaks: &lt;code&gt;data-*&lt;/code&gt; Attributes with CSS Variables
&lt;/h2&gt;

&lt;p&gt;The Tweaks panel is what makes the output feel interactive: click "dusk" and the whole design shifts to a warm dark palette. Click "compact" and the layout tightens. No regeneration. No API round-trip.&lt;/p&gt;

&lt;p&gt;The mechanism is mundane. Every theme, accent, layout, and density option is a &lt;code&gt;:root&lt;/code&gt; CSS variable override keyed to a &lt;code&gt;data-*&lt;/code&gt; attribute on the root element:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight css"&gt;&lt;code&gt;&lt;span class="nd"&gt;:root&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nt"&gt;data-theme&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;"paper"&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="py"&gt;--ink&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="m"&gt;#1a1a1a&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="py"&gt;--paper&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="m"&gt;#f4efe6&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="py"&gt;--accent&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="m"&gt;#c53b1e&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nt"&gt;data-theme&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;"dusk"&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;         &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="py"&gt;--ink&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="m"&gt;#eae3d2&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="py"&gt;--paper&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="m"&gt;#2a2620&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="py"&gt;--accent&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="m"&gt;#e77c5f&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nt"&gt;data-theme&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;"midnight"&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;     &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="py"&gt;--ink&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="m"&gt;#f0ebde&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="py"&gt;--paper&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="m"&gt;#14110d&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="py"&gt;--accent&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="m"&gt;#ff8a6a&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nt"&gt;data-accent&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;"gold"&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;        &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="py"&gt;--accent&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="m"&gt;#d4a017&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nt"&gt;data-layout&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;"stack"&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt; &lt;span class="nc"&gt;.flow&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="py"&gt;grid-template-columns&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="n"&gt;fr&lt;/span&gt; &lt;span class="cp"&gt;!important&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nt"&gt;data-density&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;"compact"&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt; &lt;span class="nt"&gt;main&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nl"&gt;padding&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;14px&lt;/span&gt; &lt;span class="m"&gt;18px&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A click handler sets &lt;code&gt;document.documentElement.dataset.theme = "dusk"&lt;/code&gt;, persists to &lt;code&gt;localStorage&lt;/code&gt;, and &lt;code&gt;postMessage&lt;/code&gt;s the host window so it can save the selection against the artifact. That's the entire switching layer.&lt;/p&gt;
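
&lt;p&gt;A minimal sketch of that switching layer. The &lt;code&gt;data-*&lt;/code&gt; axes and the &lt;code&gt;localStorage&lt;/code&gt;/&lt;code&gt;postMessage&lt;/code&gt; behavior come from the source; the button markup, storage keys, and message shape are my reconstruction:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;// Tweaks switching layer: flip a data-* attribute, persist it, notify the host.
// Axis names match the CSS above; button markup and message shape are illustrative.
const AXES = ["theme", "accent", "layout", "density"];

function applyTweak(axis, value) {
  // 1. Flip the data-* attribute; the CSS variable overrides do the rest.
  document.documentElement.dataset[axis] = value;

  // 2. Persist locally so a refresh keeps the selection.
  localStorage.setItem("tweak:" + axis, value);

  // 3. Tell the host window so it can save the choice against the artifact.
  window.parent.postMessage({ type: "tweak-changed", axis, value }, "*");
}

// Restore any persisted selections on load.
for (const axis of AXES) {
  const saved = localStorage.getItem("tweak:" + axis);
  if (saved) document.documentElement.dataset[axis] = saved;
}

// Wire up buttons like &amp;lt;button data-axis="theme" data-value="dusk"&amp;gt;.
document.querySelectorAll("[data-axis]").forEach((btn) =&amp;gt; {
  btn.addEventListener("click", () =&amp;gt; applyTweak(btn.dataset.axis, btn.dataset.value));
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
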

&lt;p&gt;Four axes, three-to-four options each, roughly 144 combinations available without regenerating anything. The design-system work is done at token definition time, not at runtime.&lt;/p&gt;

&lt;p&gt;The takeaway I'm sitting with: what feels like "AI variant generation" in interactive design tools may be mostly static CSS token switching. The AI wrote the tokens once. The switching is attribute swapping.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pattern 2 — &lt;code&gt;deck-stage.js&lt;/code&gt;: A 621-Line Web Component That Replaces a Slide Tool
&lt;/h2&gt;

&lt;p&gt;The decks in the output aren't Reveal.js or a bespoke React app. They're a custom Web Component, &lt;code&gt;&amp;lt;deck-stage&amp;gt;&lt;/code&gt;, containing plain &lt;code&gt;&amp;lt;section&amp;gt;&lt;/code&gt; children as slides.&lt;/p&gt;

&lt;p&gt;What the component does:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Fits a design-size canvas (default 1920×1080) to whatever viewport it's rendered in, using &lt;code&gt;transform: scale()&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Handles keyboard navigation (arrows, PgUp/PgDn, Space, Home, End, 0–9, R).&lt;/li&gt;
&lt;li&gt;Adds mobile tap zones (left third / right third).&lt;/li&gt;
&lt;li&gt;Persists the current slide to &lt;code&gt;localStorage&lt;/code&gt;, keyed by document path, so a refresh restores position.&lt;/li&gt;
&lt;li&gt;Renders a floating overlay with previous/next/reset controls and a slide counter.&lt;/li&gt;
&lt;li&gt;Injects a &lt;code&gt;&amp;lt;style&amp;gt;&lt;/code&gt; tag into &lt;code&gt;document.head&lt;/code&gt; with &lt;code&gt;@page { size: 1920px 1080px; margin: 0; }&lt;/code&gt; so the browser's native "Print to PDF" produces one page per slide.&lt;/li&gt;
&lt;li&gt;Emits a &lt;code&gt;slidechange&lt;/code&gt; CustomEvent with &lt;code&gt;bubbles: true, composed: true&lt;/code&gt; and a &lt;code&gt;reason&lt;/code&gt; field (&lt;code&gt;init/keyboard/click/tap/api&lt;/code&gt;) — listenable cleanly from outside the shadow DOM (see the listener sketch after this list).&lt;/li&gt;
&lt;li&gt;Reads a &lt;code&gt;&amp;lt;script type="application/json"&amp;gt;&lt;/code&gt; block of speaker notes and posts them to the host window.&lt;/li&gt;
&lt;li&gt;Honors a &lt;code&gt;noscale&lt;/code&gt; attribute for PPTX export cases where the CSS transform is undesirable.&lt;/li&gt;
&lt;/ul&gt;
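
&lt;p&gt;Listening to that &lt;code&gt;slidechange&lt;/code&gt; event from the host page is one line of setup. The event name and the &lt;code&gt;reason&lt;/code&gt; values are from the source; the rest of the detail shape and the handler body are my assumptions:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;// bubbles: true + composed: true means the event crosses the shadow boundary,
// so it can be caught on the element itself or any ancestor.
const deck = document.querySelector("deck-stage");

deck.addEventListener("slidechange", (event) =&amp;gt; {
  const { reason } = event.detail; // "init" | "keyboard" | "click" | "tap" | "api"
  console.log("slide changed via", reason);
  // From here: sync a speaker-notes panel, log analytics, update the host UI.
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
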

&lt;p&gt;Two implementation details stood out:&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;@page&lt;/code&gt; rule has to be injected into the outer document because shadow DOM ignores &lt;code&gt;@page&lt;/code&gt;. So the component walks up and writes into &lt;code&gt;document.head&lt;/code&gt; during &lt;code&gt;connectedCallback&lt;/code&gt;. This is the kind of detail that gets no documentation credit but separates "works in print" from "falls apart at export time."&lt;/p&gt;
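
&lt;p&gt;A sketch of that &lt;code&gt;connectedCallback&lt;/code&gt; move, reconstructed from the behavior described above rather than copied from &lt;code&gt;deck-stage.js&lt;/code&gt; (the dedupe guard and the id are mine):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;class DeckStage extends HTMLElement {
  connectedCallback() {
    // Shadow DOM stylesheets ignore @page, so the print rule has to live in
    // the outer document. Injecting it once here is enough.
    if (!document.getElementById("deck-stage-print-style")) {
      const style = document.createElement("style");
      style.id = "deck-stage-print-style";
      style.textContent = "@page { size: 1920px 1080px; margin: 0; }";
      document.head.appendChild(style);
    }
    // ...rest of setup: scale-to-fit, keyboard navigation, overlay controls.
  }
}
customElements.define("deck-stage", DeckStage);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
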

&lt;p&gt;Slides are hidden, not unmounted, with &lt;code&gt;visibility: hidden; opacity: 0&lt;/code&gt;. That preserves the state of videos, iframes, form inputs, and React subtrees across navigation. If you're building a slide system in React with conditional rendering, you're quietly discarding state every time the user hits the arrow key. A cheap fix with meaningful UX consequences.&lt;/p&gt;
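
&lt;p&gt;The hide-don't-unmount approach in miniature. This is the general pattern rather than the component's exact code:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;// Keep every &amp;lt;section&amp;gt; mounted; only toggle visibility. Videos, iframes,
// form inputs, and embedded framework subtrees keep their state across navigation.
function showSlide(slides, index) {
  slides.forEach((slide, i) =&amp;gt; {
    const active = i === index;
    slide.style.visibility = active ? "visible" : "hidden";
    slide.style.opacity = active ? "1" : "0";
  });
}
// Contrast with conditional rendering, which unmounts the inactive slides
// and silently discards that state every time the user hits an arrow key.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
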

&lt;h2&gt;
  
  
  Pattern 3 — &lt;code&gt;SKILL.md&lt;/code&gt;: A Manifest Format, Not a System Prompt
&lt;/h2&gt;

&lt;p&gt;The skill manifest is smaller than I expected. Three frontmatter fields:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;skill-kebab-case&amp;gt;&lt;/span&gt;
&lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Use&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;this&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;skill&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;to&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;generate&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;well-branded&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;interfaces&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;and&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;assets&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;for&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;lt;domain&amp;gt;.&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Contains&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;lt;key&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;files&amp;gt;&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;for&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;lt;style&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;context&amp;gt;."&lt;/span&gt;
&lt;span class="na"&gt;user-invocable&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The body reads like a protocol, not a persona:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"Read &lt;code&gt;README.md&lt;/code&gt; within this skill first."&lt;/li&gt;
&lt;li&gt;"Then look at &lt;code&gt;colors_and_type.css&lt;/code&gt;."&lt;/li&gt;
&lt;li&gt;"If creating visual artifacts (slides, mocks): copy assets out, produce static HTML."&lt;/li&gt;
&lt;li&gt;"If working on production code: treat &lt;code&gt;&amp;lt;path&amp;gt;/frontend/app/&lt;/code&gt; as canonical."&lt;/li&gt;
&lt;li&gt;"If invoked with no other guidance: ask 3–5 questions about scope and audience, then act as an expert designer."&lt;/li&gt;
&lt;li&gt;"Always flag: font substitutions, chart color choices, and any deviation from the documented color contract."&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Three things about this format stood out:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Description is the routing signal.&lt;/strong&gt; The orchestrator decides &lt;em&gt;when&lt;/em&gt; to invoke the skill by reading the description alone. So the description has to encode domain, output type, and stylistic signal in one paragraph — different from how most agent frameworks define a role.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The body is a branched protocol.&lt;/strong&gt; "If A, do X. If B, do Y." Not a soft persona, not a goal statement. Concrete execution paths keyed to invocation context.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;"Always flag" is mandatory self-reporting at the manifest level.&lt;/strong&gt; Fonts were substituted? Flag it. Deviated from the color contract? Flag it. It's an anti-hallucination pattern written into the skill definition rather than left to the model to remember.&lt;/p&gt;

&lt;p&gt;I don't think the manifest format itself is novel — it's structurally close to how Claude Code's existing SKILL.md works. But its use as an agent interface for a consumer-facing design product is a concrete shape I haven't seen written down this cleanly before.&lt;/p&gt;
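
&lt;p&gt;I don't know how Anthropic's orchestrator is implemented, but as a thought experiment, routing on the description alone might look like handing the model only each skill's frontmatter and loading the full &lt;code&gt;SKILL.md&lt;/code&gt; body only after the pick. Every name in this sketch is hypothetical:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;// Hypothetical routing sketch: the router sees name + description per skill and
// nothing else, which is why the description has to carry domain, output type,
// and stylistic signal in one paragraph.
async function routeToSkill(userRequest, skills, llm) {
  const catalog = skills
    .map((s) =&amp;gt; `- ${s.name}: ${s.description}`)
    .join("\n");

  const choice = await llm.complete(
    `User request: ${userRequest}\n` +
    `Installed skills:\n${catalog}\n` +
    `Reply with the single best skill name, or "none".`
  );

  return skills.find((s) =&amp;gt; s.name === choice.trim()) ?? null;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
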

&lt;h2&gt;
  
  
  Pattern 4 — Output = Self-Contained HTML Bundle
&lt;/h2&gt;

&lt;p&gt;The artifact isn't stored in a proprietary database. It's a folder:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;IR/
├── assets/
│   └── colors_and_type.css
├── IR Deck - Hi-fi.html
└── deck-stage.js
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The HTML references both the CSS and the JS by relative path. Everything is physically co-located.&lt;/p&gt;

&lt;p&gt;Zip it, upload to any static host, it works. No build step. No framework runtime. No server-side rendering required.&lt;/p&gt;

&lt;p&gt;There's a small interesting trick: the same &lt;code&gt;colors_and_type.css&lt;/code&gt; file is duplicated into multiple subfolders — one copy for the deck, one for the UI kit, one for the preview cards. The bundle is optimized for survival, not deduplication. If a user downloads just the deck folder, they don't lose styling.&lt;/p&gt;

&lt;p&gt;More bytes, no broken links. For a consumer product where users will definitely cut-and-paste the wrong subset of files, that tradeoff probably earns itself back quickly.&lt;/p&gt;
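
&lt;p&gt;If you wanted the same survival-over-deduplication behavior in your own export step, it's a few lines of Node. The paths here are illustrative, not Claude Design's:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;// export-bundle.js: copy the single source-of-truth token sheet into every
// subfolder that has to stand alone when a user grabs only that folder.
import { cpSync, mkdirSync } from "node:fs";
import { join } from "node:path";

const TOKEN_SHEET = "assets/colors_and_type.css";
const STANDALONE_DIRS = ["deck/assets", "ui_kits/web", "preview"];

for (const dir of STANDALONE_DIRS) {
  mkdirSync(dir, { recursive: true });
  cpSync(TOKEN_SHEET, join(dir, "colors_and_type.css"));
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
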

&lt;h2&gt;
  
  
  Why This Shape Is Interesting
&lt;/h2&gt;

&lt;p&gt;Going in, my mental model was roughly: "Claude Design is a big AI system that generates design output."&lt;/p&gt;

&lt;p&gt;What I came away with is closer to: "It looks like a thin AI orchestration layer over a carefully engineered runtime and manifest format, and the interesting work is in the runtime."&lt;/p&gt;

&lt;p&gt;The model writes HTML, CSS, and SKILL.md files into this system. The system is what makes the output interactive, exportable, and robust across environments. If Anthropic swapped the model tomorrow for a comparable one, my guess is the user experience would barely change — because the experience is mostly the runtime.&lt;/p&gt;

&lt;p&gt;That reframes the build-vs-buy question for anyone working in this space. You may not need a design-specialized model to get most of the user-facing value. What seems to matter more is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A token CSS file with tight, opinionated choices.&lt;/li&gt;
&lt;li&gt;A &lt;code&gt;data-*&lt;/code&gt; attribute theming layer.&lt;/li&gt;
&lt;li&gt;A Web Component (or equivalent) that handles presentation concerns: scale, navigation, print.&lt;/li&gt;
&lt;li&gt;A manifest format for the skills that generate into the runtime.&lt;/li&gt;
&lt;li&gt;A bundle format that survives being zipped and sent around.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Build that scaffold, and any capable LLM plausibly becomes your design engine. If this read is right, the hard work isn't the AI — it's the runtime the AI writes into. I'm hedging because one bundle inspection isn't enough to generalize.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I'm Taking
&lt;/h2&gt;

&lt;p&gt;Four patterns I'm adapting into our own stack:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Four-axis Tweaks (theme / accent / layout / density).&lt;/strong&gt; Roughly fifty lines of CSS and JavaScript for a meaningful UX upgrade. Low risk, high visibility.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;@page&lt;/code&gt; dynamic injection for PDF export.&lt;/strong&gt; Potentially removes the need for a separate PDF library in slide-style outputs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SKILL.md manifest format for our agents.&lt;/strong&gt; Three-field frontmatter, branched body, mandatory "Always flag" section. Structural improvement on how we currently define agent behavior.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Self-contained HTML bundles as the default artifact.&lt;/strong&gt; No server dependency, zippable, survives cut-and-paste. Lowers the support surface dramatically for client-facing deliverables.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I'm leaving aside porting the slide Web Component for now, because we already have a working runtime and the license review cost isn't obviously worth the marginal gain. The patterns above are portable with a day or two of work each.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Open Question
&lt;/h2&gt;

&lt;p&gt;The specific shape I'm describing — four patterns around a model — isn't obviously unique to design. It's roughly the shape of a lot of AI-labeled products right now. The product does something useful and the model is the visible new thing, but most of what makes it work seems to be conventional engineering inside a well-thought-out structure.&lt;/p&gt;

&lt;p&gt;If that holds, a lot of "AI products" are really platform products where the AI is the entry point rather than the engine. The scarce skill in that world isn't prompting. It's designing the runtime the prompts write into.&lt;/p&gt;

&lt;p&gt;I'm not sure whether that's a durable observation or a snapshot of where we are in this early phase of AI product maturity. But it's the one I came away from this source read with, and it's shifted how I'm thinking about our own architecture.&lt;/p&gt;

&lt;p&gt;If you're building in this space, reading the bundles your tools produce might teach you more than reading the marketing. That's the part I'd put the most confidence on.&lt;/p&gt;

</description>
      <category>claudedesign</category>
      <category>anthropic</category>
      <category>engineering</category>
      <category>webcomponents</category>
    </item>
  </channel>
</rss>
