
carson

I turned my private UI workflow into an open-source Agent Skill

For a long time, this wasn't a skill:

```shell
npx skills add carson2222/skills --skill design-exploration
```

It was just a workflow I kept in my head.

Every time I used AI for frontend work, I ran into the same problem: the result often looked decent, but it rarely felt genuinely designed. Too much of it landed in the same visual territory - safe layouts, familiar patterns, polished but forgettable output.

So I started adjusting the way I worked.

Instead of asking for one result and trying to polish it, I pushed the agent to explore multiple real directions at once. Not tiny visual tweaks. Real variants with different structure, hierarchy, tone, and emphasis.
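To make that concrete, here is the kind of request I mean. This is an invented sketch for illustration, not the actual wording inside the skill:

```
Design 4 distinct directions for this landing page.
Each variant must differ in layout structure, visual hierarchy,
and tone, not just in colors or spacing.
Do not reuse the same hero-plus-three-cards pattern across variants.
For each variant, state its design intent in one sentence
before writing any code.
```

The point is forcing structural divergence up front, so the comparison is between real directions rather than four shades of the same layout.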

That changed a lot.

It made the process feel closer to actual design exploration instead of autocomplete with Tailwind.

Over time I kept refining that workflow: what I asked for, what I forbade, how I framed constraints, how I handled existing design systems, and how I compared options. Eventually it became repeatable enough that it stopped feeling like "a good prompt" and started feeling like a tool.

That's where design-exploration came from.

A shoutout is deserved here too: one of Theo's design prompts was an early spark for this line of thinking. The idea of having the agent generate multiple variants instead of obsessing over one first try was a strong starting point. But the version I use today is something I tuned heavily through repeated real use.

Another reason I finally packaged it properly is simple: people kept asking for it.

Friends and coworkers saw the kind of output I was getting and wanted to try the same workflow themselves. At some point it became obvious that keeping it as a private mental prompt was silly. Wrapping it as a skill made it easier to reuse, easier to share, and easier to improve.

I mostly use it with OpenCode and Opus 4.6. That combination has been the most reliable for me by far. Other models can absolutely produce strong UI, but they tend to be less predictable. For this kind of work, predictability matters almost as much as quality.

One important thing, though: this is not magic.

You still need to give the model context. You still need to explain what the product is, who it is for, what kind of feeling you want, and what tradeoffs matter. If you have assets, references, or constraints, give them. The better the context, the better the exploration.
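For illustration, a context brief covering those points might look something like this. Every detail below is invented, not taken from the skill:

```
Product: invoicing tool for freelance designers
Audience: solo creatives who find "corporate" finance software alienating
Feeling: calm, precise, slightly playful
Constraints: existing brand green, mobile-first, keep current nav structure
Tradeoff: clarity of invoice status matters more than information density
```

A few lines like these give the exploration something real to diverge around; without them, the variants tend to drift back toward generic territory.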

The skill helps structure the process.

It does not replace taste.

If you want to try it, the repo is here:
https://github.com/carson2222/skills

If there's interest, I'll write a follow-up on the exact workflow, what usually fails, and how I avoid the most common AI frontend traps.
