<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Julian Oczkowski</title>
    <description>The latest articles on DEV Community by Julian Oczkowski (@aiforwork).</description>
    <link>https://dev.to/aiforwork</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1566589%2F8b9994fa-0c71-4e2d-a307-19fec13baa53.png</url>
      <title>DEV Community: Julian Oczkowski</title>
      <link>https://dev.to/aiforwork</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/aiforwork"/>
    <language>en</language>
    <item>
      <title>Vibe Design: Why Claude Design Just Redefined the Category</title>
      <dc:creator>Julian Oczkowski</dc:creator>
      <pubDate>Fri, 17 Apr 2026 20:20:46 +0000</pubDate>
      <link>https://dev.to/aiforwork/vibe-design-why-claude-design-just-redefined-the-category-3nh7</link>
      <guid>https://dev.to/aiforwork/vibe-design-why-claude-design-just-redefined-the-category-3nh7</guid>
      <description>&lt;p&gt;&lt;em&gt;After 29 years as a designer, I tested Claude Design. The category just shifted, and most designers haven't noticed yet.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;I have been a designer for 29 years. I have shipped work for Adobe, IBM, Danone, and a long list of companies in between. I have lived through Fireworks, FreeHand, Sketch, Figma, and every wave of tool consolidation along the way.&lt;/p&gt;

&lt;p&gt;Today, sitting in front of Claude Design for the first time, I caught myself saying something I have never said about a design tool before.&lt;/p&gt;

&lt;p&gt;"This is a better way of working."&lt;/p&gt;

&lt;p&gt;Not "this is interesting." Not "this might replace X." A better way of working. Full stop.&lt;/p&gt;

&lt;p&gt;The tool I was testing is Claude Design, announced this morning by Anthropic Labs. Powered by Opus 4.7. Research preview. claude.ai/design.&lt;/p&gt;

&lt;p&gt;But the tool is not what this article is about. The tool is a proof point. What matters is the category it just defined.&lt;/p&gt;

&lt;p&gt;I am calling it vibe design.&lt;/p&gt;




&lt;h2&gt;What vibe design actually is&lt;/h2&gt;

&lt;p&gt;The phrase vibe coding, coined by Andrej Karpathy, describes a mode of software development where a human expresses intent in natural language and an AI produces, revises, and ships the code. You stop typing the implementation. You start steering it.&lt;/p&gt;

&lt;p&gt;Vibe design is the same shift, one layer up.&lt;/p&gt;

&lt;p&gt;It is design produced by expressing intent in natural language, refined through live parameters and direct conversation, and handed off to code without ever producing a static design file. No .fig. No artboards. No developer translating pixels back into semantic markup. The design is never a deliverable on its own. It is always one step in the conversation between intent and shipped product.&lt;/p&gt;

&lt;p&gt;Three properties define it:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Natural language as the primary input.&lt;/strong&gt; You describe what you want, the tool produces a starting version, you refine through conversation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Live parameters as the primary refinement surface.&lt;/strong&gt; Instead of nudging pixels, you adjust sliders, toggle variants, and let the tool regenerate. Typography, spacing, color, layout, shape. All live. All reversible. All parametric.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Direct handoff to code, not to developers.&lt;/strong&gt; The output of vibe design is not a file for a developer to rebuild. It is a structured description of intent that the code layer consumes directly. In Claude Design's case, that means a handoff bundle that Claude Code reads and builds into working software.&lt;/p&gt;
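&lt;p&gt;Anthropic has not published what that handoff bundle actually looks like, so treat the following as a guess at the shape, not the format. Every field name below is invented for illustration. The point it makes is the structural one: a code agent consumes intent, not a pixel snapshot.&lt;/p&gt;

```python
# Purely illustrative: a sketch of the KIND of structured design intent a
# code agent could consume. All field names here are invented, not Claude
# Design's real schema.
handoff_bundle = {
    "screen": "calculator",
    "layout": {"type": "grid", "columns": 4, "gap": "8px"},
    "components": [
        {"role": "display", "variant": "mono", "align": "right"},
        {"role": "keypad", "keys": "0-9 + - * / ="},
    ],
    "tokens": {"color.primary": "#1a73e8", "radius": "12px"},
    "intent": "A friendly, large-target calculator for quick mental math.",
}

# Nothing above locks in a rendering: every value stays parametric until
# the code layer decides how to realize it.
assert handoff_bundle["layout"]["columns"] == 4
```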

&lt;p&gt;If you have used v0 or Lovable, you have done a version of this. But those tools collapse design and code into a single step. Vibe design separates them again, deliberately, so designers can iterate on intent before the code layer takes over.&lt;/p&gt;

&lt;p&gt;That is the new thing. That is what Claude Design just made legible.&lt;/p&gt;




&lt;h2&gt;Why this is different from the AI design tools you've already seen&lt;/h2&gt;

&lt;p&gt;For the last two years, AI design tools have lived on a spectrum from "AI in your design tool" to "AI that writes your frontend code." I have tested most of them on my YouTube channel. Here is where each one sits.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Figma Make and Figma Agent.&lt;/strong&gt; AI inside Figma. You still produce a .fig file. The AI accelerates the production of that file. The output is still a deliverable that a developer translates into code. Incremental, not categorical.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Google Stitch.&lt;/strong&gt; Mobile-first, prompt-to-visual. Beautiful output, narrow scope. A speed tool, not a workflow tool.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;v0 by Vercel.&lt;/strong&gt; Prompt to React component. Collapses design and code. Great for shadcn-heavy projects, less great when you need visual iteration separate from code iteration.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lovable.&lt;/strong&gt; Full app generation. Excellent if you want an end-to-end app with auth and database. Less excellent if you want to iterate on pure UI intent.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Magic Path, Framer AI, Canva Magic Studio.&lt;/strong&gt; Different flavors of the same pattern: AI accelerating existing design surfaces.&lt;/p&gt;

&lt;p&gt;Claude Design sits somewhere none of these do. It treats design as a separate phase from code, but it refuses to produce a static file as the deliverable. The output is always live. Always parametric. Always ready to hand off as design intent to a code-generating agent.&lt;/p&gt;

&lt;p&gt;This is what makes it feel different in practice. When I tested the calculator kit example, I was not dragging handles or nudging pixels. I was talking. I was dragging sliders. I was commenting on specific elements and asking Claude to revise. When I was happy, I clicked one button and handed the entire design off to Claude Code, which built a working prototype in an empty folder within minutes.&lt;/p&gt;

&lt;p&gt;No Figma file was produced at any point.&lt;/p&gt;

&lt;p&gt;And I never missed it.&lt;/p&gt;




&lt;h2&gt;The .fig file was the problem&lt;/h2&gt;

&lt;p&gt;Here is the uncomfortable thesis.&lt;/p&gt;

&lt;p&gt;For the last decade, the .fig file has been the default unit of design work. You produce one. A developer opens it, measures it, translates it into code, and ships it. The design is finished when the file is finished. The implementation is somebody else's problem.&lt;/p&gt;

&lt;p&gt;This worked when AI could not read intent. A Figma file was the most efficient way to encode design decisions that a human developer could reconstruct.&lt;/p&gt;

&lt;p&gt;That constraint just dissolved.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx79b8xg98u7m7mv83bw8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx79b8xg98u7m7mv83bw8.png" alt=" " width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When an AI model can consume a structured description of design intent and produce working code that matches it, the intermediate file stops being useful. Worse, it becomes a liability. It locks the design into a specific visual snapshot instead of leaving it parametric. It forces a translation step where information is lost. It creates a handoff culture where designers throw files over the wall and hope.&lt;/p&gt;

&lt;p&gt;Vibe design kills that handoff culture.&lt;/p&gt;

&lt;p&gt;In Claude Design, I pointed the onboarding flow at a real GitHub repository. It read my team's design system directly from the code. Every design it produced after that used the right colors, the right typography, the right components. When I handed the final design to Claude Code, Claude Code built it using those same components, because it was reading the same repo.&lt;/p&gt;

&lt;p&gt;The .fig file would have been the wrong artifact at every step in that loop. Nobody needs it. The design intent is the asset. The rendering is disposable.&lt;/p&gt;
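&lt;p&gt;To make "the design intent is the asset" concrete, here is a minimal sketch of the kind of read a tool could do against a repo: pulling design tokens straight out of a CSS custom-properties file. Claude Design's actual ingestion is not documented; the stylesheet and token names below are made up.&lt;/p&gt;

```python
import re

# Hypothetical sketch: reading design tokens from code instead of from a
# .fig file. Assumes tokens live as CSS custom properties, which is a
# common but not universal convention.
CSS = """
:root {
  --color-primary: #1a73e8;
  --color-surface: #ffffff;
  --font-body: "Inter", sans-serif;
  --space-md: 16px;
}
"""

def extract_tokens(css_text):
    """Pull '--name: value;' pairs out of a stylesheet."""
    pairs = re.findall(r"--([\w-]+)\s*:\s*([^;]+);", css_text)
    return {name: value.strip() for name, value in pairs}

tokens = extract_tokens(CSS)
print(tokens["color-primary"])   # → #1a73e8
```

&lt;p&gt;Any design produced downstream of a read like this inherits the product's real palette and type, which is why nothing in the loop ever needs a static snapshot.&lt;/p&gt;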




&lt;h2&gt;What this means for designers&lt;/h2&gt;

&lt;p&gt;Two things, and you need to hear both honestly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The first thing.&lt;/strong&gt; If your value as a designer is producing polished Figma files that developers rebuild in code, your value just dropped. Not tomorrow. Not in five years. Today. The AI tools are already better at producing that specific output than most mid-level designers, and they are closing the gap with senior designers fast.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The second thing, which matters more.&lt;/strong&gt; The designers who understand what to design, why it matters, how to frame the problem, and how to guide a parametric system toward a meaningful outcome just got the largest leverage increase in the history of the craft. One senior designer with Claude Design, a real design system, and Claude Code can ship more product in a week than a five-person team could six months ago.&lt;/p&gt;

&lt;p&gt;This is the IC 2.0 thesis I have been writing and making videos about for the last year. AI collapses execution. Judgment and orchestration become the job. Vibe design is the specific form that takes inside design practice.&lt;/p&gt;

&lt;p&gt;The designers who thrive will be the ones who treat tools like Claude Design not as threats but as amplifiers of the parts of the job that always mattered: understanding users, framing problems, holding taste, knowing when something is wrong.&lt;/p&gt;

&lt;p&gt;The designers who struggle will be the ones who defend the file as sacred.&lt;/p&gt;




&lt;h2&gt;What this means for Figma&lt;/h2&gt;

&lt;p&gt;Honest take, because I like Figma and I have used it every working day for years.&lt;/p&gt;

&lt;p&gt;Figma is not dying. Figma owns direct manipulation. Figma owns the collaborative file format. Figma owns the plugin ecosystem. None of that disappears in the next 12 months.&lt;/p&gt;

&lt;p&gt;But Figma's role is shrinking.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa06f15xfggkcaam5gwn9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa06f15xfggkcaam5gwn9.png" alt=" " width="800" height="597"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For five years, Figma was the center of gravity in product design. Everything started there, everything ended there, and the rest of the tools orbited it. That ended today. There is now a legitimate workflow where a design is conceived in Claude Design, refined in Claude Design, handed off to Claude Code, and shipped to production without ever touching Figma.&lt;/p&gt;

&lt;p&gt;Figma Make was Figma's attempt to catch this wave. It is a good tool, but it lives inside the Figma walled garden. It produces designs that still live as Figma files. It is a response to the last category, not the new one.&lt;/p&gt;

&lt;p&gt;If Figma does not ship a vibe-design-native product in the next 12 months, they will be defending market share in a category that is no longer the growth category. I would not bet against them. But I would not assume the default is still the default either.&lt;/p&gt;




&lt;h2&gt;What this means for everyone else&lt;/h2&gt;

&lt;p&gt;Vibe design is not only for designers. The three groups who get the largest upside are not designers at all.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Product managers.&lt;/strong&gt; PMs have been asking for "higher fidelity earlier" for a decade. Vibe design delivers that without requiring a designer on every feature spike. A PM can produce a wireframe, a high-fidelity mockup, or a working prototype in the same afternoon, in the company's design system, and hand it to engineering.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Founders and marketers.&lt;/strong&gt; The gap between "idea" and "shareable visual" just collapsed. A landing page, a pitch deck, a campaign asset. All producible from intent, all exportable to Canva or PPTX or HTML, all without waiting on creative resources.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Engineers.&lt;/strong&gt; Engineers who want to prototype a UI without pulling a designer off another project just got a tool that produces design-system-correct output in minutes and hands off directly to their existing code workflow.&lt;/p&gt;

&lt;p&gt;Vibe design is a democratizing shift. The skill floor for producing visual work just dropped. The skill ceiling for directing visual work just got higher.&lt;/p&gt;




&lt;h2&gt;Where this goes next&lt;/h2&gt;

&lt;p&gt;Three predictions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Direct manipulation comes back.&lt;/strong&gt; Right now you cannot grab an element in Claude Design and drag it. Everything goes through chat or sliders. That will change within 6 months. The winning tool combines vibe design's parametric intent layer with Figma's direct control. Whoever ships that first owns the category for the next decade.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1hidvwwnn165sxbsczsd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1hidvwwnn165sxbsczsd.png" alt=" " width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Mobile becomes table stakes.&lt;/strong&gt; Claude Design is web-only today. Design happens on laptops anyway, but the review, commenting, and handoff flow needs mobile. Expect mobile within 12 months.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Multi-tool orchestration becomes the advanced workflow.&lt;/strong&gt; The designers who win in 2027 will not be "Claude Design designers" or "Figma designers." They will orchestrate across Claude Design, Figma, Stitch, v0, and whatever ships next, using each for the job it is best at. Taste, judgment, and tool orchestration become the differentiator. Fluency in any single tool becomes a commodity.&lt;/p&gt;




&lt;h2&gt;How to start today&lt;/h2&gt;

&lt;p&gt;If you have a Claude Pro, Max, Team, or Enterprise subscription, Claude Design is at claude.ai/design. Research preview.&lt;/p&gt;

&lt;p&gt;If you want to actually test vibe design rather than just read about it, here is the workflow I recommend. It is the one I used this morning.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Point Claude Design at a real GitHub repo during onboarding. Use your own project, not a demo.&lt;/li&gt;
&lt;li&gt;Describe a specific page or flow you actually need to build. Not a toy example. Real work.&lt;/li&gt;
&lt;li&gt;Start in wireframe mode. Resist the urge to jump to high fidelity. Sit with the low-fi output and iterate.&lt;/li&gt;
&lt;li&gt;Use the live sliders. They look like toys. They are not. They teach you how parametric the design system can be.&lt;/li&gt;
&lt;li&gt;Convert to high fidelity. Notice how much it already looks like your product.&lt;/li&gt;
&lt;li&gt;Hand off to Claude Code. Paste the command. Watch it build.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;That loop, end to end, takes about an hour the first time. Forty minutes the second time. After that, you will start redesigning how you work.&lt;/p&gt;




&lt;p&gt;I have been a designer for 29 years. I have never been more certain that the next 10 years of design will look nothing like the last 10.&lt;/p&gt;

&lt;p&gt;Vibe design is here.&lt;/p&gt;

&lt;p&gt;Figma is not the center of gravity anymore.&lt;/p&gt;

&lt;p&gt;The designers who see that early will own the decade.&lt;/p&gt;




&lt;p&gt;If you want to see the full test end-to-end, here's the 17-minute walkthrough:&lt;/p&gt;

&lt;p&gt;  &lt;iframe src="https://www.youtube.com/embed/4q2F4zblOLQ"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Julian Oczkowski is a designer with 29 years of experience, former contractor for Adobe, IBM, and Danone, and the creator of the AI For Work YouTube channel. He publishes weekly tests of AI tools for designers, product managers, and engineers.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;YouTube: &lt;a href="https://www.youtube.com/@aiforwork_app" rel="noopener noreferrer"&gt;https://www.youtube.com/@aiforwork_app&lt;/a&gt;&lt;/em&gt;&lt;br&gt;
&lt;em&gt;LinkedIn: &lt;a href="https://www.linkedin.com/in/julianoczkowski" rel="noopener noreferrer"&gt;https://www.linkedin.com/in/julianoczkowski&lt;/a&gt;&lt;/em&gt;&lt;br&gt;
&lt;em&gt;GitHub: &lt;a href="https://github.com/julianoczkowski" rel="noopener noreferrer"&gt;https://github.com/julianoczkowski&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>design</category>
      <category>vibecoding</category>
      <category>figma</category>
      <category>claude</category>
    </item>
    <item>
      <title>Done — Is a Lie. Your Definition of Done Is Broken in 2026.</title>
      <dc:creator>Julian Oczkowski</dc:creator>
      <pubDate>Wed, 08 Apr 2026 11:47:12 +0000</pubDate>
      <link>https://dev.to/aiforwork/done-is-a-lie-your-definition-of-done-is-broken-in-2026-501k</link>
      <guid>https://dev.to/aiforwork/done-is-a-lie-your-definition-of-done-is-broken-in-2026-501k</guid>
      <description>&lt;p&gt;You shipped the feature. The PR merged. The Figma file got handed off. The PM closed the ticket. Done, right?&lt;/p&gt;

&lt;p&gt;No. Not even close.&lt;/p&gt;

&lt;p&gt;Here is the uncomfortable truth that most teams are ignoring: if your work cannot be understood, queried, and built upon by an LLM, it is not done. It is abandoned.&lt;/p&gt;




&lt;h2&gt;The old definition of done was built for humans only&lt;/h2&gt;

&lt;p&gt;For decades, "done" meant something like this: the design is signed off, the code is deployed, the stakeholders are happy, and the Jira ticket moves to the right. Maybe someone writes a confluence page. Maybe they don't. Either way, the knowledge about &lt;em&gt;why&lt;/em&gt; that feature exists, what was tried and rejected, what user research informed the decision, lives in one place: someone's head.&lt;/p&gt;

&lt;p&gt;That is tribal knowledge. And tribal knowledge has an expiration date. The person leaves. The Slack thread gets buried. The context evaporates.&lt;/p&gt;

&lt;p&gt;We have tolerated this for years because the only consumers of that knowledge were other humans, and humans are pretty good at tapping someone on the shoulder and asking. But the shoulder tap does not scale. And now there is a second consumer of your work that cannot tap anyone on the shoulder.&lt;/p&gt;





&lt;h2&gt;The new consumer: your LLM&lt;/h2&gt;

&lt;p&gt;Every team I work with now uses an LLM somewhere in their workflow. Cursor for code. Claude Code for builds. ChatGPT for research. Copilot for pull requests. The tools vary, the pattern does not: people are pointing AI at their work and expecting it to understand.&lt;/p&gt;

&lt;p&gt;But here is the problem. When that LLM tries to understand your project, what does it find? A Figma file with no annotation. A codebase with no architecture decision records. A product spec from six months ago that never got updated after the pivot. A design system with no documented rationale.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F003iv6a3xlxaplvh23ir.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F003iv6a3xlxaplvh23ir.png" alt=" " width="800" height="457"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The LLM cannot tap anyone on the shoulder. It can only read what you wrote down. And most teams write down almost nothing.&lt;/p&gt;

&lt;p&gt;This is why Andrej Karpathy's LLM Wiki concept, which went viral this week, resonated so hard. Karpathy described a system where you feed raw material (articles, research, documents, transcripts) into a folder, and an LLM compiles it into a structured, interlinked wiki. No vector databases. No complex infrastructure. Just markdown files that compound over time.&lt;/p&gt;

&lt;p&gt;The concept is simple: stop relying on your memory. Start building a knowledge base that an LLM can actually navigate.&lt;/p&gt;
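&lt;p&gt;The plumbing behind that idea is small enough to sketch. The version below stubs the LLM pass with a trivial &lt;code&gt;summarize&lt;/code&gt; function; in a real setup that stub would be a model call returning the compiled page. The folder layout and naming are my assumptions, not Karpathy's.&lt;/p&gt;

```python
from pathlib import Path

# Minimal sketch of the ingest loop, assuming a plain folder of raw .txt
# notes. No vector database, no infrastructure: just markdown files.
def summarize(text):
    """Stand-in for the LLM pass that turns raw text into a wiki page."""
    first_line = text.strip().splitlines()[0]
    return "# " + first_line + "\n\n" + text.strip()

def build_wiki(raw_dir, wiki_dir):
    raw, wiki = Path(raw_dir), Path(wiki_dir)
    wiki.mkdir(exist_ok=True)
    index = []
    for src in sorted(raw.glob("*.txt")):
        page = wiki / (src.stem + ".md")
        page.write_text(summarize(src.read_text()))
        index.append("- [[" + src.stem + "]]")   # Obsidian-style wikilink
    (wiki / "index.md").write_text("# Index\n\n" + "\n".join(index))
```

&lt;p&gt;Each run adds pages and index entries, which is the whole "compounds over time" property: the system gets richer every time you drop in new raw material.&lt;/p&gt;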




&lt;h2&gt;What Karpathy's wiki exposes about how we work&lt;/h2&gt;

&lt;p&gt;I built this system live on my channel using Claude Code and Obsidian. The setup takes minutes. You create a vault, drop in your raw files, paste Karpathy's gist into Claude Code with a short prompt, and the LLM builds the entire wiki structure: index, log, entity pages, relationships, everything.&lt;/p&gt;

&lt;p&gt;But the interesting part is not the tooling. It is what the process reveals about how badly we document our own work.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F09zcckcoj8678kgx0zzy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F09zcckcoj8678kgx0zzy.png" alt=" " width="800" height="457"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When I pointed Claude Code at my YouTube channel's transcripts, around 60 of them, it built a knowledge graph that surfaced connections I had never consciously made. Topics I kept returning to. Contradictions between what I said in one video versus another. Gaps where I had strong opinions but zero supporting evidence.&lt;/p&gt;

&lt;p&gt;That is what happens when you make your knowledge machine readable. The machine reads it. And it finds the holes.&lt;/p&gt;

&lt;p&gt;Now imagine this at a team level. A designer ships a component. A PM ships a feature. An engineer ships an API. Each person's "done" is isolated. The designer's rationale lives in a Figma comment. The PM's reasoning lives in a Google Doc that three people have access to. The engineer's context lives in commit messages that nobody reads.&lt;/p&gt;

&lt;p&gt;None of this is queryable. None of it compounds. None of it survives the next reorg.&lt;/p&gt;




&lt;h2&gt;The new definition of done&lt;/h2&gt;

&lt;p&gt;Here is my argument, and I know it will be controversial: in 2026, "done" needs to include documentation that is structured for LLM consumption. Not just human consumption. Both.&lt;/p&gt;

&lt;p&gt;For designers, that means: your design system documents, component decisions, brand guidelines, user research notes, interview transcripts, UX teardowns, and critique feedback should live in a single, queryable knowledge base. When a new designer joins the team, they should be able to point an LLM at that knowledge base and get onboarded in hours instead of weeks.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F25qp3izuix5d6sddp3mk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F25qp3izuix5d6sddp3mk.png" alt=" " width="800" height="457"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For product managers, that means: your PRDs, decision logs, stakeholder interview notes, sprint retros, OKRs, metrics, customer feedback, NPS scores, and competitor teardowns should be in one place. Not scattered across Notion, Google Docs, Confluence, and someone's personal notes app. When the next PM picks up your product area, they should be able to query the LLM and understand not just what was shipped, but why.&lt;/p&gt;

&lt;p&gt;Think about what that actually means. It means "done" is not just the artifact you shipped. It is the artifact plus the reasoning, documented in a format that both humans and machines can traverse.&lt;/p&gt;

&lt;p&gt;If you shipped the feature but the "why" only exists in your head, you are not done. You are a single point of failure.&lt;/p&gt;




&lt;h2&gt;Why knowledge graphs change everything&lt;/h2&gt;

&lt;p&gt;There is a reason Karpathy's system uses interlinked markdown files and not a flat folder of documents. The structure matters. The relationships between pieces of knowledge are often more valuable than the pieces themselves.&lt;/p&gt;

&lt;p&gt;When I ran the ingest on my YouTube transcripts, the LLM did not just create 15 isolated summary pages. It created a graph. Concepts linked to other concepts. Entities referenced across multiple sources. Relationships that I, the person who created all of that content, had never explicitly drawn.&lt;/p&gt;

&lt;p&gt;This is the fundamental insight behind knowledge graphs: information in isolation is data. Information with relationships is knowledge. And knowledge compounds in ways that data never can.&lt;/p&gt;
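&lt;p&gt;A rough sketch of why the structure matters: if each markdown page declares its relationships inline as wikilinks, the whole graph can be recovered in a few lines. The page names and bodies below are invented examples.&lt;/p&gt;

```python
import re

# Pages declare relationships as [[wikilinks]]; the graph falls out for
# free. This is the structural difference from a flat folder of documents.
pages = {
    "modal-component": "Built after [[q2-user-research]], see [[sprint-14-retro]].",
    "q2-user-research": "Sessions that informed [[modal-component]].",
    "sprint-14-retro": "Debated the interaction pattern in [[modal-component]].",
}

def link_graph(pages):
    """Map each page to the pages it links to."""
    return {name: re.findall(r"\[\[([\w-]+)\]\]", body)
            for name, body in pages.items()}

graph = link_graph(pages)
print(graph["modal-component"])   # → ['q2-user-research', 'sprint-14-retro']
```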

&lt;p&gt;Think about how a traditional design system works. You have a component library. You have usage guidelines. You have accessibility notes. Maybe you have decision logs explaining why a certain component was built a certain way. In most organizations, these live in separate tools, maintained by separate people, with no explicit connections between them.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnj19dfc8ujk157b7q2cw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnj19dfc8ujk157b7q2cw.png" alt=" " width="800" height="457"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now imagine all of that ingested into a wiki where the LLM has built relationships between the component, the user research that informed it, the accessibility standard it satisfies, the sprint retro where the team debated the interaction pattern, and the customer feedback that triggered the redesign. That is not a design system. That is a design knowledge graph. And when a new designer joins and asks "why does our modal work this way?", the LLM does not just retrieve a usage guideline. It traces the full chain of reasoning.&lt;/p&gt;

&lt;p&gt;This is what Obsidian's graph view makes visible. You can literally see clusters of tightly connected knowledge, isolated nodes that should be linked but are not, and structural gaps where your team has strong opinions but weak documentation. The visual representation alone surfaces problems that no amount of Confluence searching would reveal.&lt;/p&gt;
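&lt;p&gt;What the graph view surfaces visually can also be computed directly. The sketch below, on invented data, finds the two failure modes: pages nothing references, and links pointing at pages that do not exist yet.&lt;/p&gt;

```python
# `links` maps each page to its outgoing wikilink targets. The data is
# illustrative; in practice you would build this map from real pages.
links = {
    "design-principles": ["modal-component", "color-tokens"],
    "modal-component": ["q2-user-research"],
    "color-tokens": [],
    "old-brand-audit": [],          # exists, but nothing references it
}

targets = set()
for outgoing in links.values():
    targets.update(outgoing)

# Isolated nodes: never linked to, linking to nothing.
orphans = [p for p in links if p not in targets and len(links[p]) == 0]
# Structural gaps: referenced pages that were never written.
missing = sorted(t for t in targets if t not in links)

print(orphans)   # → ['old-brand-audit']
print(missing)   # → ['q2-user-research']
```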

&lt;p&gt;The same principle applies to product work. A PM's decision to prioritize feature A over feature B involves competitive analysis, customer interviews, metrics, stakeholder input, and strategic alignment. In most teams, that decision lives as a single line in a roadmap tool. The reasoning behind it is scattered across five tools and three people's memories. In a knowledge graph, all of those inputs are linked to the decision node. The "why" is not just documented. It is structurally connected to everything that informed it.&lt;/p&gt;

&lt;p&gt;This is not academic. This is practical. When your LLM has access to a knowledge graph instead of a flat folder, its answers go from generic to specific. It stops saying "best practices suggest..." and starts saying "based on the user research from Q2 and the decision log from sprint 14, this component was designed to..."&lt;/p&gt;

&lt;p&gt;That is the difference between an LLM that sounds helpful and an LLM that actually is.&lt;/p&gt;




&lt;h2&gt;The second brain is not a personal productivity hack&lt;/h2&gt;

&lt;p&gt;The "second brain" concept has been around for years, popularized by Tiago Forte and the personal knowledge management community. But most people think of it as a note-taking system. A way to organize your Evernote or your Notion. A personal productivity hack.&lt;/p&gt;

&lt;p&gt;That framing is too small for what is happening now.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1nmxklshtj29tgjsbvbf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1nmxklshtj29tgjsbvbf.png" alt=" " width="800" height="457"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When Karpathy described his system, he was not talking about personal note-taking. He was describing a fundamentally different relationship between a human and their accumulated knowledge. In his setup, the LLM has built and maintains a wiki of over 100 articles and 400,000 words from his research. He rarely edits it directly. The LLM writes, updates, links, and maintains the entire thing. His role is to feed it raw material and ask it questions.&lt;/p&gt;

&lt;p&gt;This is not a note-taking app. It is an externalized, queryable, self-maintaining extension of your professional memory.&lt;/p&gt;

&lt;p&gt;And the implications for teams are enormous. Consider what happens when every designer, every PM, and every engineer on a team maintains a second brain that feeds into a shared wiki. The team's collective knowledge stops being trapped in individual heads and starts existing as a persistent, queryable artifact.&lt;/p&gt;

&lt;p&gt;New hire onboarding goes from "shadow Sarah for two weeks" to "point your LLM at the team wiki and ask it anything." Post-mortem insights stop being a Google Doc that nobody reads and start being connected nodes in a knowledge graph that inform every future decision. Design critique feedback stops evaporating after the meeting and starts compounding into a searchable body of design reasoning.&lt;/p&gt;

&lt;p&gt;The second brain concept matters here because it reframes documentation from a chore into an investment. Every raw file you drop into the wiki, every transcript, every research note, every decision log, makes the entire system smarter. The knowledge compounds. The connections multiply. The LLM gets better at answering questions about your work because there is more context for it to draw from.&lt;/p&gt;

&lt;p&gt;This is compounding returns applied to organizational knowledge. And like any compounding system, the teams that start early will have an exponential advantage over those that start late.&lt;/p&gt;




&lt;h2&gt;
  
  
  The gap between how we work and how LLMs work
&lt;/h2&gt;

&lt;p&gt;Here is a problem most teams have not thought about yet. LLMs do not work the way humans work. Humans can hold ambiguity, read between the lines, infer context from a half-finished Slack message, and fill gaps with institutional memory they have absorbed over months of working in the same building. LLMs cannot do any of that.&lt;/p&gt;

&lt;p&gt;An LLM reads what you give it. Literally. If your design system documentation says "use the primary button for the main action" but does not define what constitutes a "main action" in your product's specific context, the LLM will give you a generic answer. If your PRD says "we decided to go with option B" but does not explain what option A was or why it was rejected, the LLM cannot reconstruct that reasoning.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkdp87by87r9dmhafjcsw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkdp87by87r9dmhafjcsw.png" alt=" " width="800" height="457"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Most documentation is written for humans who already have context. It assumes shared understanding. It relies on the reader knowing who "the team" refers to, what "the incident" was, and why "that approach" was rejected. These are invisible references that a human colleague can decode but an LLM cannot.&lt;/p&gt;

&lt;p&gt;This is why a wiki with explicit relationships matters so much. When the LLM builds the knowledge graph, it forces implicit knowledge to become explicit. Every entity gets a page. Every relationship gets a link. Every decision gets connected to its inputs. The ambiguity that humans navigate effortlessly becomes structured knowledge that machines can traverse.&lt;/p&gt;

&lt;p&gt;Karpathy's gist includes tools for exactly this problem. The health check scans the wiki for unlinked entities, concepts that are mentioned but never defined, references that point nowhere. The lint pass finds inconsistencies, articles that contradict each other, claims that lack supporting sources. These are not just maintenance tools. They are quality controls on your team's collective knowledge.&lt;/p&gt;
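&lt;p&gt;To make that concrete, here is a rough sketch of what such a health check does, assuming a flat folder of markdown pages that use [[wiki-link]] syntax. This is an illustration of the pattern, not the actual tooling from Karpathy's gist:&lt;/p&gt;

```python
import re
from pathlib import Path

# Capture the page name in [[Page]], [[Page|label]], or [[Page#section]].
WIKI_LINK = re.compile(r"\[\[([^\]|#]+)")

def health_check(wiki_dir):
    """Report broken links (targets with no page) and orphans (pages nothing links to)."""
    pages = {p.stem: p.read_text(encoding="utf-8") for p in Path(wiki_dir).glob("*.md")}
    linked, broken = set(), []
    for name, text in pages.items():
        for target in (t.strip() for t in WIKI_LINK.findall(text)):
            linked.add(target)
            if target not in pages:
                broken.append((name, target))
    orphans = sorted(name for name in pages if name not in linked)
    return broken, orphans
```

&lt;p&gt;Run over a real wiki, this surfaces exactly the two failure modes that matter: links that point nowhere, and pages that nothing points to.&lt;/p&gt;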

&lt;p&gt;And here is the part that should make you uncomfortable: if you ran a health check on your team's existing documentation right now, how many gaps would it find? How many undefined concepts? How many decisions with no recorded reasoning? How many references to conversations that happened six months ago and were never written down?&lt;/p&gt;

&lt;p&gt;That gap between what your team knows and what your team has documented is the exact gap that makes your LLM useless.&lt;/p&gt;




&lt;h2&gt;
  
  
  From personal wiki to team knowledge infrastructure
&lt;/h2&gt;

&lt;p&gt;One concern I hear often is: "This is cool for one person, but how does it work for a team?" Karpathy's original system is local. One person, one vault, one LLM.&lt;/p&gt;

&lt;p&gt;But the solution is straightforward. Put the wiki in a GitHub repository. Everyone on the team can contribute raw files. When someone has a new research finding, a meeting transcript, a design decision, they drop it into the raw folder and push. Then anyone on the team can run the ingest, and the LLM updates the shared wiki with the new information, linking it to everything that already exists.&lt;/p&gt;

&lt;p&gt;This is exactly what I have done with my own YouTube channel wiki. The raw files, the images, the entire compiled wiki lives in a repository. If I had collaborators, they could contribute to the same knowledge base. The LLM handles the compilation. The humans handle the curation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fso25m3omrwqdbi5o9n8b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fso25m3omrwqdbi5o9n8b.png" alt=" " width="800" height="457"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For teams, this means the wiki becomes a living artifact that grows with every sprint, every research round, every design review. It is not a documentation project that someone has to champion. It is a byproduct of doing the work, as long as you build the habit of capturing the reasoning alongside the output.&lt;/p&gt;

&lt;p&gt;The GitHub model also solves the versioning problem. You can see what was added when, by whom, and how the LLM integrated it into the existing structure. You have full history. You have accountability. And you have a knowledge base that no single person's departure can destroy.&lt;/p&gt;




&lt;h2&gt;
  
  
  "But we already document things"
&lt;/h2&gt;

&lt;p&gt;No, you don't. Not really.&lt;/p&gt;

&lt;p&gt;Most teams have documentation the same way most people have a gym membership. It exists. It was set up with good intentions. Nobody uses it consistently.&lt;/p&gt;

&lt;p&gt;And even when documentation exists, it is almost never structured for LLM consumption. It is buried in tools with terrible search. It is written in a format that assumes the reader already has context. It is never updated after the initial write.&lt;/p&gt;

&lt;p&gt;Karpathy's wiki pattern solves this because the LLM maintains the structure. You do not have to be disciplined about formatting or linking. You drop raw material in, and the LLM does the compilation. It creates the relationships. It maintains the index. It runs health checks to find gaps and inconsistencies.&lt;/p&gt;

&lt;p&gt;The discipline you need is simpler: make a habit of capturing your reasoning, not just your output.&lt;/p&gt;




&lt;h2&gt;
  
  
  The tribal knowledge problem is now an AI problem
&lt;/h2&gt;

&lt;p&gt;Here is where this gets urgent. Teams are already using LLMs to make decisions. Engineers ask Cursor about the codebase. PMs ask Claude to analyze user feedback. Designers ask ChatGPT about accessibility patterns.&lt;/p&gt;

&lt;p&gt;But those LLMs are operating with incomplete context. They do not know your company's design principles. They do not know why you chose that specific API architecture. They do not know about the user research that killed the original feature direction.&lt;/p&gt;

&lt;p&gt;So the LLM gives generic answers. And people ship generic work. And then everyone wonders why AI is not delivering on its promise.&lt;/p&gt;

&lt;p&gt;The problem is not the AI. The problem is that your organizational knowledge is inaccessible to it.&lt;/p&gt;




&lt;h2&gt;
  
  
  What to do about it
&lt;/h2&gt;

&lt;p&gt;This is not theoretical. You can start today.&lt;/p&gt;

&lt;p&gt;Set up an Obsidian vault. Install the Web Clipper extension. Open Claude Code in the vault directory. Paste Karpathy's gist with a simple prompt, and the LLM will build your wiki structure in minutes. Then start feeding it: design specs, meeting notes, research findings, decision logs. Everything you would want a new team member to know.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fceu81py8u0wquy2merhj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fceu81py8u0wquy2merhj.png" alt=" " width="800" height="457"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The repo for the setup I demonstrated is public: &lt;a href="https://github.com/julianoczkowski/karpathy-llm-wiki" rel="noopener noreferrer"&gt;github.com/julianoczkowski/karpathy-llm-wiki&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once you have the wiki, you can plug it into anything: Claude projects, Cursor, Windsurf, Copilot. It is just files. That is the beauty of it. No vendor lock-in, no complex infrastructure, no monthly subscription. Just markdown that compounds.&lt;/p&gt;
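&lt;p&gt;That portability is easy to demonstrate. A minimal sketch, assuming a flat folder of markdown pages, that bundles the whole wiki into a single string you can paste into any of those tools (the character cap is an arbitrary stand-in for a model's context budget, not part of any tool's API):&lt;/p&gt;

```python
from pathlib import Path

def bundle_wiki(wiki_dir, max_chars=200_000):
    """Concatenate every markdown page into one pasteable context string."""
    parts, total = [], 0
    for page in sorted(Path(wiki_dir).glob("*.md")):
        section = f"# {page.stem}\n\n{page.read_text(encoding='utf-8')}\n\n"
        if total + len(section) > max_chars:
            break  # stay under a rough context budget
        parts.append(section)
        total += len(section)
    return "".join(parts)
```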

&lt;p&gt;And once you start, the benefits compound fast. Each new source you ingest does not just add one more document. It adds connections to every existing document. A user research transcript from today links to the design decision from three months ago, which links to the competitive analysis from last quarter. The knowledge graph gets denser, richer, and more useful with every addition.&lt;/p&gt;

&lt;p&gt;This is why I think about the wiki less as a tool and more as a habit. The same way a developer commits code daily, a designer or PM should commit knowledge daily. Not polished documentation. Not formatted reports. Just raw reasoning, dropped into a folder, compiled by an LLM that never forgets.&lt;/p&gt;

&lt;p&gt;The teams that build this habit will have a structural advantage that grows over time. Their LLMs will give better answers because they have better context. Their new hires will onboard faster because the institutional memory is queryable. Their decisions will be more consistent because the reasoning behind past decisions is always accessible.&lt;/p&gt;

&lt;p&gt;The teams that do not build this habit will keep losing knowledge every time someone goes on holiday, changes teams, or leaves the company. They will keep asking the same questions, making the same mistakes, and wondering why their AI tools give such generic output.&lt;/p&gt;




&lt;h2&gt;
  
  
  The uncomfortable question
&lt;/h2&gt;

&lt;p&gt;Next time you close a ticket or hand off a design, ask yourself: could an LLM reconstruct the reasoning behind this decision from what I have documented?&lt;/p&gt;

&lt;p&gt;If the answer is no, you are not done.&lt;/p&gt;

&lt;p&gt;The definition of done in 2026 is not just "shipped." It is shipped, documented, and queryable. By humans and machines alike.&lt;/p&gt;

&lt;p&gt;The teams that figure this out first will compound their knowledge. The teams that don't will keep starting from scratch, every sprint, every hire, every reorg.&lt;/p&gt;

&lt;p&gt;Your choice.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Julian Oczkowski is a designer with 29 years of experience, including work with Adobe, IBM, and Danone. He runs AI For Work, a YouTube channel focused on practical AI workflows for designers, PMs, and engineers.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Watch the full LLM Wiki build tutorial:&lt;/em&gt; &lt;a href="https://youtu.be/lnsExa1UbnM" rel="noopener noreferrer"&gt;youtu.be/lnsExa1UbnM&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Subscribe for weekly AI workflow tests:&lt;/em&gt; &lt;a href="https://www.youtube.com/@aiforwork_app?sub_confirmation=1" rel="noopener noreferrer"&gt;youtube.com/@aiforwork_app&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tags:&lt;/strong&gt; AI, Second Brain, Product Management, UX Design, Product Design&lt;/p&gt;

</description>
      <category>ai</category>
      <category>uxdesign</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Vibe Coding Is Dead. Orchestration Is What Comes Next.</title>
      <dc:creator>Julian Oczkowski</dc:creator>
      <pubDate>Sun, 05 Apr 2026 13:31:00 +0000</pubDate>
      <link>https://dev.to/aiforwork/vibe-coding-is-dead-orchestration-is-what-comes-next-1h64</link>
      <guid>https://dev.to/aiforwork/vibe-coding-is-dead-orchestration-is-what-comes-next-1h64</guid>
      <description>&lt;h2&gt;
  
  
  How Cursor 3, Codex, and a wave of new tools are proving that the future of software development is not writing code. It is managing the agents that do.
&lt;/h2&gt;




&lt;h2&gt;
  
  
  Vibe coding was only phase one
&lt;/h2&gt;

&lt;p&gt;For the past year, vibe coding has been the story. One person, one AI agent, one task. You write a prompt, the agent writes the code, you review it, and you ship. Repeat.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkzy8z12gi4i95r7ckhnf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkzy8z12gi4i95r7ckhnf.png" alt=" " width="800" height="597"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It worked. It proved something that a lot of people did not believe was possible: that non-engineers (designers, product managers, and founders) could build real software with AI as their collaborator. Tools like Cursor, Claude Code, and Codex made this accessible. The barrier to building dropped to nearly zero.&lt;/p&gt;
&lt;p&gt;It worked. It proved something that a lot of people did not believe was possible: that non-engineers (designers, product managers, and founders) could build real software with AI as their collaborator. Tools like Cursor, Claude Code, and Codex made this accessible. The barrier to building dropped to nearly zero.&lt;/p&gt;

&lt;p&gt;But vibe coding hit a ceiling.&lt;/p&gt;

&lt;p&gt;One agent at a time does not scale. You are still the bottleneck. You prompt, you wait, you review, you prompt again. It is faster than writing code yourself, but the loop is still sequential. You are still working on one thing at a time, the same way developers have worked for decades. The tool changed. The workflow did not.&lt;/p&gt;

&lt;p&gt;That is about to change.&lt;/p&gt;




&lt;h2&gt;
  
  
  The orchestration shift
&lt;/h2&gt;

&lt;p&gt;Something different is happening in 2026. The role of the builder is shifting from writing prompts to orchestrating agents.&lt;/p&gt;

&lt;p&gt;Instead of running one agent on one task, you run five agents in parallel across different parts of your project. One is refactoring your navigation. Another is building a new API endpoint. A third is writing tests. You are not writing code. You are not even prompting in the traditional sense. You are defining outcomes, assigning agents, reviewing their work, and deciding what ships.&lt;/p&gt;

&lt;p&gt;This is not theory. This is what the latest generation of tools is shipping right now. And they are all converging on the same idea: the future of software development is not about writing code. It is about managing the agents that do.&lt;/p&gt;

&lt;p&gt;The skill that matters is no longer syntax. It is judgment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fomgpo2l1ugi6ak5gjyfz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fomgpo2l1ugi6ak5gjyfz.png" alt=" " width="800" height="597"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  The tools converging on the same idea
&lt;/h2&gt;

&lt;p&gt;I have been testing AI coding tools for over a year, and something became obvious in the last few weeks. Every major player is building the same thing: an orchestration layer for agents.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cursor.com/blog/cursor-3" rel="noopener noreferrer"&gt;Cursor 3&lt;/a&gt; dropped on April 2nd with a completely rebuilt interface called the Agents Window. It lets you run multiple agents in parallel across repos and environments. You can see all your agents in one sidebar, kick them off from desktop, mobile, Slack, or GitHub, and manage the entire flow from prompt to merged PR without leaving the app. I tested it on a real project and ran three agents simultaneously on the same application. All three finished while I was still typing notes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://chatgpt.com/codex" rel="noopener noreferrer"&gt;Codex&lt;/a&gt; is OpenAI's version. Similar layout, similar concept. Launch agents, review their work, ship. The limitation is that you are locked into OpenAI's models only.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://conductor.build/" rel="noopener noreferrer"&gt;Conductor&lt;/a&gt; just raised $22M in a Series A. It is a visual multi-agent environment that lets you use Claude Code and Codex subscriptions inside their interface.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/t3dotgg" rel="noopener noreferrer"&gt;T3 Code&lt;/a&gt; is Theo's open-source answer to the same problem. A desktop GUI that wraps Claude Code and Codex, giving you multi-repo parallel agents, git worktree integration, and commit-and-push from the UI. Free, no subscription on top of your existing API costs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://superset.sh/" rel="noopener noreferrer"&gt;Superset&lt;/a&gt; is a newer entrant solving the same problem from slightly different angles.&lt;/p&gt;

&lt;p&gt;They all look similar because they are all solving the same problem: one person needs to manage many agents at once, and the terminal is not enough to do that.&lt;/p&gt;

&lt;p&gt;I tested Cursor 3 on a real project and documented everything, the features that work, the ones that are broken, and the honest verdict. Watch the full breakdown here: &lt;a href="https://youtu.be/AAGmJAvec9o?si=tACNpGVRltnL69y4" rel="noopener noreferrer"&gt;I Tested Cursor 3&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;  &lt;iframe src="https://www.youtube.com/embed/AAGmJAvec9o"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;




&lt;h2&gt;
  
  
  Design Mode: why visual feedback changes who can orchestrate
&lt;/h2&gt;

&lt;p&gt;Most AI coding tools are built for developers. Terminal-first, text-only. You describe what you want in words, and the agent interprets your description. This works if you think in code. It breaks if you think visually.&lt;/p&gt;

&lt;p&gt;Cursor 3 shipped a feature called Design Mode that changes this. You open the integrated browser, toggle Design Mode with Cmd+Shift+D, and click on any UI element. Cursor grabs the code and a screenshot, attaches them to the agent chat, and you type what you want changed.&lt;/p&gt;

&lt;p&gt;No more writing "the button on the right side of the card component." You just point at it.&lt;/p&gt;

&lt;p&gt;I have been designing interfaces for 29 years. This is the first AI coding feature that felt like it was built for someone who thinks visually.&lt;/p&gt;

&lt;p&gt;It is not perfect. On Windows, Design Mode is broken. Some annotations lose their text when you send them to the chat. It is early. But the direction matters more than the current state: visual feedback opens orchestration to designers and product managers, not just engineers.&lt;/p&gt;

&lt;p&gt;The people who can look at an interface, spot what is wrong, and articulate why now have a direct line to the agents that fix it. That is a significant shift in who gets to build.&lt;/p&gt;




&lt;h2&gt;
  
  
  What I learned running three agents at once
&lt;/h2&gt;

&lt;p&gt;I tested Cursor 3 on a character image generator I built with Claude Code. It is a Vue application that uses Google's Imagen models to create illustrated characters for my YouTube thumbnails.&lt;/p&gt;

&lt;p&gt;I ran three agents in parallel on the same project:&lt;/p&gt;

&lt;p&gt;Agent one replaced the plain text header with an SVG logo. Agent two added a copy-to-clipboard button so I could paste generated images directly into Figma. Agent three built an image preview dialog with metadata, showing the full-size image, the prompt used to generate it, and copy and delete controls.&lt;/p&gt;

&lt;p&gt;All three agents worked on the same codebase at the same time. All three finished while I was still reviewing the first one's output.&lt;/p&gt;

&lt;p&gt;The bottleneck was not the agents. It was me. My ability to review three sets of changes, decide which ones to accept, and catch the things the agents got wrong. That is the new skill: judgment under speed.&lt;/p&gt;

&lt;p&gt;The agents can write code faster than any human. But they cannot tell you whether the spacing feels right, whether the button hierarchy makes sense, or whether the feature actually solves the problem you set out to solve. That part is still yours.&lt;/p&gt;




&lt;h2&gt;
  
  
  What breaks when you orchestrate
&lt;/h2&gt;

&lt;p&gt;The tools work. What breaks is you.&lt;/p&gt;

&lt;p&gt;Running three agents in parallel sounds productive until you try to review all three outputs at the same time. Each agent made dozens of changes across multiple files. Each one made decisions I did not explicitly ask for. Agent three added copy and delete buttons to the preview dialog without being prompted. That was a good decision. But I had to verify that it was a good decision, and I had to do that while also reviewing the other two agents' work.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgtdotg6ys9ne8i95j6f9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgtdotg6ys9ne8i95j6f9.png" alt=" " width="800" height="597"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is the cognitive load problem that nobody is talking about. Vibe coding with one agent is manageable. You prompt, you review, you move on. Orchestrating five or ten agents is a fundamentally different cognitive task. You are not coding anymore. You are managing a team that works at machine speed and has no concept of priority.&lt;/p&gt;

&lt;p&gt;Every agent produces output that looks confident. None of them flag uncertainty. None of them say "I was not sure about this part, you should check it." They just do the work and present it as done. Your job is to find the mistakes they did not tell you about, across multiple parallel workstreams, while deciding which changes to keep, which to rework, and which to throw away entirely.&lt;/p&gt;

&lt;p&gt;The bottleneck is not compute. It is attention.&lt;/p&gt;

&lt;p&gt;I found myself developing a pattern: kick off the agents, let them all finish, then review sequentially rather than trying to watch them work in real time. Watching agents code live is satisfying but counterproductive. The value is in the review, not the observation.&lt;/p&gt;
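&lt;p&gt;That dispatch-then-review loop is simply fan-out, fan-in. A minimal sketch of its shape, where run_agent is a hypothetical stand-in for a real agent call, not any tool's actual API:&lt;/p&gt;

```python
from concurrent.futures import ThreadPoolExecutor

def run_agent(task):
    """Hypothetical stand-in for a real agent call (Cursor, Claude Code, Codex)."""
    return f"proposed change for: {task}"

def orchestrate(tasks):
    """Fan every task out to an agent in parallel, then fan in for sequential review."""
    with ThreadPoolExecutor(max_workers=len(tasks) or 1) as pool:
        results = list(pool.map(run_agent, tasks))
    # Review happens after everything has finished, one result at a time.
    return list(zip(tasks, results))
```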

&lt;p&gt;The other pattern that helped was scoping each agent tightly. "Replace the header with an SVG logo" is better than "improve the header area." Vague prompts in a single-agent workflow just produce vague output. Vague prompts in a multi-agent workflow produce vague output multiplied by the number of agents, and you have to untangle all of it.&lt;/p&gt;

&lt;p&gt;Orchestration scales your output. It also scales the cost of poor judgment. If you assign five agents with unclear intent, you do not get five times the productivity. You get five sets of changes you do not fully understand, and the merge becomes the hardest part of the entire workflow.&lt;/p&gt;

&lt;p&gt;The skill is not running more agents. It is knowing exactly what to ask each one to do, and being ruthless about what you accept when they are done.&lt;/p&gt;




&lt;h2&gt;
  
  
  The IC 2.0 argument
&lt;/h2&gt;

&lt;p&gt;There is a bigger story here than any single tool.&lt;/p&gt;

&lt;p&gt;For the past two years, the conversation about AI and work has been stuck on a single question: will AI replace my job? That is the wrong question. The right question is: what does my job become when AI handles the execution?&lt;/p&gt;

&lt;p&gt;I call it IC 2.0. The individual contributor who thrives in this shift is not the one who writes the best prompts or knows the most keyboard shortcuts. It is the one who brings process, context, and judgment to agent orchestration.&lt;/p&gt;

&lt;p&gt;Tasks get automated. Purpose does not.&lt;/p&gt;

&lt;p&gt;The designer who can look at a generated UI and immediately see that the visual hierarchy is wrong, that is judgment. The product manager who can review three parallel agent outputs and pick the one that actually solves the user's problem, that is judgment. The engineer who can spot that the agent's refactor introduced a subtle race condition, that is judgment.&lt;/p&gt;

&lt;p&gt;Twenty-nine years of design experience taught me what no model can learn: when something is wrong and why. That instinct is not automatable. It is the thing that makes orchestration work.&lt;/p&gt;

&lt;p&gt;The shift is not from human to AI. It is from execution to accountability. The IC 2.0 does not write every line of code. They own every outcome.&lt;/p&gt;




&lt;h2&gt;
  
  
  What to do right now
&lt;/h2&gt;

&lt;p&gt;If you are reading this and wondering where to start, here are three things you can do today.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pick one orchestration tool and learn the loop.&lt;/strong&gt; It does not matter whether it is Cursor, T3 Code, Conductor, or just two terminal windows running Claude Code side by side. The important thing is to get comfortable with the cycle: prompt, review, decide, ship. Run it on a real project, not a tutorial.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Start running two agents in parallel, even on small tasks.&lt;/strong&gt; The muscle you need to build is not prompting. It is context switching between agent outputs and making fast decisions about what to keep and what to throw away. This is uncomfortable at first. That discomfort is the learning.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Stop optimising your prompts. Start optimising your judgment.&lt;/strong&gt; The difference between a good orchestrator and a bad one is not the quality of their prompts. It is the quality of their decisions after the agents finish. Can you spot the subtle bug? Can you tell which of three implementations is the most maintainable? Can you decide in 30 seconds whether to ship or rework?&lt;/p&gt;

&lt;p&gt;That is where the leverage is. Not in the typing. In the thinking.&lt;/p&gt;




&lt;p&gt;If you want to see more AI workflow tests and tool comparisons from a designer's perspective, subscribe to AI For Work on YouTube.&lt;/p&gt;

&lt;p&gt;I also open-sourced a set of designer skills for Claude Code and Cursor: &lt;a href="https://github.com/julianoczkowski/designer-skills" rel="noopener noreferrer"&gt;designer-skills on GitHub.&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And I built an MCP server for NotebookLM that lets AI agents interact with your notebooks directly: &lt;a href="https://github.com/julianoczkowski/notebooklm-mcp-2026" rel="noopener noreferrer"&gt;notebooklm-mcp-2026 on GitHub.&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>webdev</category>
      <category>programming</category>
      <category>vibecoding</category>
    </item>
    <item>
      <title>7 Claude Code Design Skills That Follow a Real Design Process</title>
      <dc:creator>Julian Oczkowski</dc:creator>
      <pubDate>Tue, 31 Mar 2026 09:23:36 +0000</pubDate>
      <link>https://dev.to/aiforwork/7-claude-code-design-skills-that-follow-a-real-design-process-4f63</link>
      <guid>https://dev.to/aiforwork/7-claude-code-design-skills-that-follow-a-real-design-process-4f63</guid>
      <description>&lt;p&gt;I have been designing for 29 years. I have worked as an individual contributor, a design manager, and a dev team lead across enterprise companies like Adobe, IBM, and Danone. I have seen tools come and go. Sketch replaced Photoshop. Figma replaced Sketch. Each time, the tooling changed but the process underneath stayed the same.&lt;/p&gt;

&lt;p&gt;AI is different. AI is not just replacing the tool. It is replacing the process. And most people are skipping straight to the fun part, typing a prompt and watching code appear, without thinking about what comes before.&lt;/p&gt;

&lt;p&gt;That is how you get faster chaos.&lt;/p&gt;




&lt;h2&gt;
  
  
  The problem with "prompt and pray"
&lt;/h2&gt;

&lt;p&gt;Open Claude Code. Type "build me a dashboard." Watch it generate something. It looks decent. Maybe even impressive.&lt;/p&gt;

&lt;p&gt;But ask yourself: does it solve the right problem? Does it match the user's mental model? Are the information architecture decisions intentional or random? Are the design tokens consistent or did the LLM just pick whatever looked good in the moment?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2ogj9igdig243eeaacpn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2ogj9igdig243eeaacpn.png" alt="Claude Code generating a dashboard with no design process" width="800" height="404"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Most of the time, the answer is no. The output looks like software but does not feel like software. It is missing the invisible work that separates a prototype from a product.&lt;/p&gt;

&lt;p&gt;That invisible work is the design process. Requirements gathering. Design briefs. Information architecture. Design tokens. Task decomposition. Review cycles. It is not glamorous. It is not the part people post on X. But it is the part that makes everything else work.&lt;/p&gt;




&lt;h2&gt;
  
  
  What I built: 7 Claude Code skills that follow a real design process
&lt;/h2&gt;

&lt;p&gt;Instead of fighting this, I decided to encode the process into Claude Code itself. I built 7 custom skills that you can install and run in any project. They follow a professional design workflow from vague idea to working, accessible, reviewed frontend.&lt;/p&gt;

&lt;p&gt;Here is the full flow.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvhk4wmk2gz50gni107qk.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvhk4wmk2gz50gni107qk.webp" alt="Diagram showing the 7 skill workflow from Grill Me to Design Review" width="660" height="918"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Grill Me
&lt;/h3&gt;

&lt;p&gt;This is the most important skill and probably the shortest one in terms of code. The LLM becomes relentless. It stress-tests your requirements before a single line of code gets written.&lt;/p&gt;

&lt;p&gt;You might spend 20 minutes answering questions. That is the point. If you cannot articulate what you are building and why, the LLM should not be building it for you. This skill uses decision trees to probe deeper based on your answers: what type of application, who are the users, what is the scale, what are the edge cases.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhk4ontptqt2oww9jtniw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhk4ontptqt2oww9jtniw.png" alt="Grill Me skill stress-testing requirements in Claude Code terminal" width="800" height="531"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The output is a grill summary that captures everything you discussed. Think of it as the requirements handshake between you and the LLM.&lt;/p&gt;

&lt;p&gt;Credit where it is due: the grill-me concept was inspired by Matt Pocock's skills work at &lt;a href="https://github.com/mattpocock/skills" rel="noopener noreferrer"&gt;github.com/mattpocock/skills&lt;/a&gt;. I took the idea and adapted it for a designer's workflow.&lt;/p&gt;
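&lt;p&gt;The skill itself is a markdown prompt, not code, but the decision-tree idea behind it is easy to sketch. The questions below are invented for illustration; they are not the skill's actual content:&lt;/p&gt;

```python
# Invented example questions; the real skill is a markdown prompt, not a Python dict.
TREE = {
    "What type of application is this?": {
        "dashboard": ["Who reads it daily?", "What is the one metric that matters?"],
        "form": ["What happens on invalid input?", "Where does the data go?"],
    },
}

def follow_ups(question, answer):
    """Pick deeper questions based on the previous answer, with a generic fallback probe."""
    return TREE.get(question, {}).get(answer, ["Why does this need to exist at all?"])
```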

&lt;h3&gt;
  
  
  2. Design Brief
&lt;/h3&gt;

&lt;p&gt;Once the grill is done, the LLM generates a design brief. But it does not just summarise your answers. It looks through the actual codebase. It checks for existing components, design systems, patterns already in the repository.&lt;/p&gt;
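&lt;p&gt;At its simplest, that codebase check means looking for well-known design-system markers. An illustrative Python sketch; the file names are my assumptions, not what the skill actually scans for:&lt;/p&gt;

```python
from pathlib import Path

# Hypothetical markers; the real skill reads the repository more broadly.
DESIGN_SYSTEM_MARKERS = [
    "tailwind.config.js",
    "tailwind.config.ts",
    "src/styles/tokens.css",
    "src/components/ui",  # e.g. a shadcn/ui-style component folder
]

def has_design_system(repo_root: str) -> bool:
    """Return True if the repo already contains a recognisable design system."""
    root = Path(repo_root)
    return any((root / marker).exists() for marker in DESIGN_SYSTEM_MARKERS)
```

&lt;p&gt;The point is that the brief is grounded in what already exists in the repository, not generated in a vacuum.&lt;/p&gt;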

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F538so07md6qjvbehdbcf.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F538so07md6qjvbehdbcf.webp" alt="Design brief generated by Claude Code with emotional tone and visual references" width="800" height="380"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Before creating the brief, it asks design-specific questions. Not just what the application needs to do, but how you want it to feel. Emotional tone. Visual inspiration. It might suggest references like Linear or the Google Admin Console based on your domain.&lt;/p&gt;

&lt;p&gt;The result is a proper design brief document saved to your project.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Information Architecture
&lt;/h3&gt;

&lt;p&gt;The LLM goes through all the documentation generated so far and structures the information architecture. Pages, navigation patterns, content hierarchy. If the feature is complex, like a multi-page flow or a full navigation redesign, it will ask additional clarifying questions before committing.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxlhir5ogszdqozcs926y.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxlhir5ogszdqozcs926y.webp" alt="Information architecture output showing page structure and navigation" width="800" height="351"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Design Tokens
&lt;/h3&gt;

&lt;p&gt;If your project does not already have a design system with components in the repository, this skill generates a complete set of design tokens as CSS custom properties. Colours, typography, spacing, elevation, border radius. Everything saved to a theme file that the frontend skill will consume later.&lt;/p&gt;
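&lt;p&gt;To make "tokens as CSS custom properties" concrete, here is a minimal sketch of the idea. The token names and values are invented examples; the generated theme file will differ per project:&lt;/p&gt;

```python
# Illustrative token-to-CSS generator; these values are made-up examples.
TOKENS = {
    "color-primary": "#2563eb",
    "color-surface": "#ffffff",
    "font-family-base": "'Inter', sans-serif",
    "space-2": "0.5rem",
    "space-4": "1rem",
    "radius-md": "0.375rem",
}

def to_css_theme(tokens: dict[str, str]) -> str:
    """Render a token dict as a :root block of CSS custom properties."""
    lines = [f"  --{name}: {value};" for name, value in tokens.items()]
    return ":root {\n" + "\n".join(lines) + "\n}\n"
```

&lt;p&gt;Components then reference var(--color-primary) instead of raw values, which is what keeps the generated UI consistent.&lt;/p&gt;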

&lt;p&gt;If you already have a design system, this step is skipped automatically, because the brief skill detected it back in step two.&lt;/p&gt;

&lt;h3&gt;
  
  
  5. Brief to Tasks
&lt;/h3&gt;

&lt;p&gt;This is where the design brief gets broken down into actionable tasks. The skill reads all the documentation, identifies dependencies between tasks, and creates a separate markdown file that tracks every task with its status.&lt;/p&gt;

&lt;p&gt;Foundation tasks first, then core UI, then responsive and polish. Each task has a clear scope and the LLM knows what order to execute them in.&lt;/p&gt;
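&lt;p&gt;Ordering tasks by their dependencies is a topological sort. An illustrative Python sketch with made-up task names (the skill itself writes this to a markdown file rather than computing it in code):&lt;/p&gt;

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Hypothetical task graph: each task maps to the tasks it depends on.
TASKS = {
    "theme-tokens": set(),
    "layout-shell": {"theme-tokens"},
    "asset-table": {"layout-shell"},
    "dashboard-charts": {"layout-shell"},
    "responsive-pass": {"asset-table", "dashboard-charts"},
}

def execution_order(tasks: dict[str, set[str]]) -> list[str]:
    """Return a valid build order: foundation first, polish last."""
    return list(TopologicalSorter(tasks).static_order())
```

&lt;p&gt;Whatever the concrete tasks are, the invariant holds: nothing gets built before the things it depends on.&lt;/p&gt;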

&lt;h3&gt;
  
  
  6. Frontend Design
&lt;/h3&gt;

&lt;p&gt;This is the build phase. The LLM uses the design brief, information architecture, design tokens, and task list to generate the actual frontend. Components, pages, layouts. Not random code. Intentional code that follows the decisions made in the previous five steps.&lt;/p&gt;

&lt;p&gt;The output is a working application. In my demo, I started with nothing more than "I want an asset management application" and ended up with a complete IT asset management tool with a dashboard, asset tracking, categories, reports, filtering, sorting, empty states, edit dialogs, and proper navigation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fawwmkevhr7tpftzkov0d.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fawwmkevhr7tpftzkov0d.webp" alt="Generated IT asset management dashboard with charts and data tables" width="800" height="631"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  7. Design Review
&lt;/h3&gt;

&lt;p&gt;Once the frontend is generated, the design review skill analyses the output. You can either paste screenshots into Claude Code manually or, better yet, use a Playwright MCP server to automate the entire review.&lt;/p&gt;

&lt;p&gt;With Playwright, Claude Code opens a headless browser, navigates through the application, takes screenshots of every page, and then runs the design review skill against those screenshots autonomously. No manual work.&lt;/p&gt;
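&lt;p&gt;A sketch of that capture loop using Playwright's Python API. The dev-server URL, route list, and output paths are my assumptions, and capture_all requires Playwright to be installed (pip install playwright, then playwright install chromium):&lt;/p&gt;

```python
def screenshot_plan(base_url: str, routes: list[str]) -> list[tuple[str, str]]:
    """Map each route to a (full URL, screenshot filename) pair for review."""
    plan = []
    for route in routes:
        name = route.strip("/").replace("/", "-") or "home"
        plan.append((base_url.rstrip("/") + route, f"review/{name}.png"))
    return plan

def capture_all(base_url: str, routes: list[str]) -> None:
    """Open a headless browser and screenshot every page in the plan."""
    # Import here so the pure helper above works without Playwright installed.
    from playwright.sync_api import sync_playwright
    with sync_playwright() as p:
        browser = p.chromium.launch()  # headless by default
        page = browser.new_page()
        for url, path in screenshot_plan(base_url, routes):
            page.goto(url)
            page.screenshot(path=path, full_page=True)
        browser.close()
```

&lt;p&gt;The screenshots then get fed back into the design review skill, which is the part Claude Code runs autonomously.&lt;/p&gt;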

&lt;p&gt;The review catches things like sparse layouts, incorrect chart ordering, missing dark mode considerations, and accessibility gaps. It then proposes specific changes and can apply them directly.&lt;/p&gt;




&lt;h2&gt;
  
  
  The result
&lt;/h2&gt;

&lt;p&gt;From a vague one-line prompt to a working application with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;91 Lighthouse performance score&lt;/li&gt;
&lt;li&gt;100 Lighthouse accessibility score&lt;/li&gt;
&lt;li&gt;Proper information architecture&lt;/li&gt;
&lt;li&gt;Consistent design tokens&lt;/li&gt;
&lt;li&gt;Documented design brief and task tracking&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Not because the LLM is magic. Because it followed a process.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjbaebhas26csb9chfos9.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjbaebhas26csb9chfos9.webp" alt="Lighthouse scores showing 91 performance and 100 accessibility" width="800" height="591"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Why this matters for designers right now
&lt;/h2&gt;

&lt;p&gt;The old design process, or the pseudo-process that most teams actually followed, is gone. AI tools are getting better and faster every month. But speed without direction just means you waste time faster.&lt;/p&gt;

&lt;p&gt;I have seen this from both sides. As an IC shipping work, and as a manager reviewing it. The people who will thrive with AI tools are the ones who bring process, judgment, and accountability. The tools handle execution. You handle the "what" and the "why."&lt;/p&gt;

&lt;p&gt;That is what these skills encode. Not shortcuts. A professional design process that happens to run inside a terminal.&lt;/p&gt;




&lt;h2&gt;
  
  
  Try it yourself
&lt;/h2&gt;

&lt;p&gt;The skills are free and open source. Install them in any Claude Code project with a single command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx skills add julianoczkowski/designer-skills

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;GitHub repo: &lt;a href="https://github.com/julianoczkowski/designer-skills" rel="noopener noreferrer"&gt;github.com/julianoczkowski/designer-skills&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Watch the full walkthrough:&lt;br&gt;
  &lt;iframe src="https://www.youtube.com/embed/1pV7bvbaCFg"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;I walk through the entire process from start to finish in a video on my channel, AI For Work. If you want to see every step running live, including the Playwright MCP autonomous review, it is all there.&lt;/p&gt;

&lt;p&gt;If you are a designer, whether you are just starting out or you have been doing this for decades, the shift is happening. The question is not whether AI will change your work. It is whether you will bring process to the chaos or just let the chaos run faster.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Julian Oczkowski is a designer with 29 years of experience across enterprise companies, startups, and agencies. He runs the &lt;a href="https://www.youtube.com/@aiforwork_app" rel="noopener noreferrer"&gt;AI For Work&lt;/a&gt; YouTube channel, covering AI tool comparisons, workflow demos, and practical builds for designers and individual contributors.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>ai</category>
      <category>vibecoding</category>
    </item>
  </channel>
</rss>
